
29th Annual Computational Neuroscience Meeting: CNS*2020

K1 Deep reinforcement learning and its neuroscientific implications

Matthew Botvinick

Google DeepMind, Neuroscience Research, London, United Kingdom

Correspondence: Matthew Botvinick (botvinick@google.com)

BMC Neuroscience 2020, 21(Suppl 1):K1

The last few years have seen some dramatic developments in artificial intelligence research. What implications might these have for neuroscience? Investigations of this question have, to date, focused largely on deep neural networks trained using supervised learning, in tasks such as image classification. However, there is another area of recent AI work which has so far received less attention from neuroscientists, but which may have more profound neuroscientific implications: deep reinforcement learning. Deep RL offers a rich framework for studying the interplay among learning, representation and decision-making, offering to the brain sciences a new set of research tools and a wide range of novel hypotheses. I’ll provide a high-level introduction to deep RL, discuss some recent neuroscience-oriented investigations from my group at DeepMind, and survey some wider implications for research on brain and behavior.

K2 A new computational framework for understanding vision in our brain

Zhaoping Li

Max Planck Institute for Biological Cybernetics, Tübingen, Germany

Correspondence: Zhaoping Li (li.zhaoping@tuebingen.mpg.de)

BMC Neuroscience 2020, 21(Suppl 1):K2

Visual attention selects only a tiny fraction of visual input information for further processing. Selection starts in the primary visual cortex (V1), which creates a bottom-up saliency map to guide the fovea to selected visual locations via gaze shifts. This motivates a new framework that views vision as consisting of encoding, selection, and decoding stages, placing selection on center stage. It suggests a massive loss of non-selected information from V1 downstream along the visual pathway. Hence, feedback from downstream visual cortical areas to V1 for better decoding (recognition), through analysis-by-synthesis, should query for additional information and be mainly directed at the foveal region. Accordingly, non-foveal vision is not only poorer in spatial resolution, but also more susceptible to many illusions.

K3 Information and decision-making

Daniel Polani

University of Hertfordshire, School of Computer Science, Hatfield, United Kingdom

Correspondence: Daniel Polani (d.polani@herts.ac.uk)

BMC Neuroscience 2020, 21(Suppl 1):K3

In recent years it has become increasingly clear that (Shannon) information is a central resource for organisms, akin in importance to energy. Any decision that an organism or a subsystem of an organism takes involves the acquisition, selection, and processing of information and ultimately its concentration and enaction. It is the consequences of this balance that will occupy us in this talk.

This perception-action loop picture of an agent’s life cycle is well established and expounded especially in the context of Fuster’s sensorimotor hierarchies. Nevertheless, the information-theoretic perspective drastically expands the potential and predictive power of the perception-action loop perspective.

On the one hand, information can be treated - to a significant extent - as a resource that is sought and utilized by an organism. On the other hand, unlike energy, information is not additive. The intrinsic structure and dynamics of information can be exceedingly complex and subtle; over the last two decades it has become clear that Shannon information possesses a rich and nontrivial intrinsic structure that must be taken into account when informational contributions, information flow or causal interactions of processes are investigated, whether in the brain or in other complex processes.

In addition, strong parallels between information and control theory have emerged. This parallelism between the theories allows one to obtain unexpected insights into the nature and properties of the perception-action loop. Through the lens of information theory, one can not only come up with novel hypotheses about necessary conditions for the organization of information processing in a brain, but also with constructive conjectures and predictions about what behaviours, brain structure and dynamics and even evolutionary pressures one can expect to operate on biological organisms, induced purely by informational considerations.

K4 Computational models of neural development

Geoffrey J. Goodhill

University of Queensland, Queensland Brain Institute; School of Mathematics and Physics, St Lucia, Australia

Correspondence: Geoffrey J Goodhill (g.goodhill@uq.edu.au)

BMC Neuroscience 2020, 21(Suppl 1):K4

Unlike even the most sophisticated current forms of artificial intelligence, developing biological organisms must build their neural hardware from scratch. Furthermore they must start to evade predators and find food before this construction process is complete. I will discuss an interdisciplinary program of mathematical and experimental work which addresses some of the computational principles underlying neural development. This includes (i) how growing axons navigate to their targets by detecting and responding to molecular cues in their environment, (ii) the formation of maps in the visual cortex and how these are influenced by visual experience, and (iii) how patterns of neural activity in the zebrafish brain develop to facilitate precisely targeted hunting behaviour. Together this work contributes to our understanding of both normal neural development and the etiology of neurodevelopmental disorders.

F1 Delineating reward/avoidance decision process in the impulsive-compulsive spectrum disorders through a probabilistic reversal learning task

Xiaoliu Zhang1, Chao Suo2, Amir Dezfouli3, Ben J Harrison4, Leah Braganza2, Ben Fulcher2, Leonardo Fontenelle2, Carsten Murawski5, Murat Yucel2

1Monash University, Monash Biomedical Imaging, Melbourne, Australia; 2Monash University, BrainPark, Turner Institute for Brain and Mental Health, School of Psychological Science, Melbourne, Australia; 3Machine Learning Research Group, Data61, CSIRO, Sydney, Australia; 4University of Melbourne and Melbourne Health, Melbourne Neuropsychiatry Centre, Department of Psychiatry, Melbourne, Australia; 5University of Melbourne, Department of Finance, Melbourne, Australia

Correspondence: Xiaoliu Zhang (smile.in.sjtu@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):F1

Impulsivity and compulsivity are behavioural traits that underlie many aspects of decision-making and form the characteristic symptoms of Obsessive Compulsive Disorder (OCD) and Gambling Disorder (GD). The neural underpinnings of aspects of reward and avoidance learning under the expression of these traits and symptoms are only partially understood.

The present study combined behavioural modelling and neuroimaging techniques to examine brain activity associated with critical phases of reward and loss processing in OCD and GD.

Forty-two healthy controls (HC), forty OCD and twenty-three GD participants were recruited to complete a two-session reinforcement learning (RL) task featuring a “probability switch (PS)” during imaging scanning. In total, 39 HC (20F/19M, 34 yrs ± 9.47), 28 OCD (14F/14M, 32.11 yrs ± 9.53) and 16 GD (4F/12M, 35.53 yrs ± 12.20) were included with both behavioural and imaging data available. Functional imaging was conducted using a 3.0-T SIEMENS MAGNETOM Skyra (syngo MR D13C) at Monash Biomedical Imaging. Each volume comprised 34 coronal slices of 3 mm thickness, with a TR of 2000 ms and a TE of 30 ms. A total of 479 volumes were acquired for each participant in each session in an interleaved-ascending manner.

The standard Q-learning model was fitted to the observed behavioural data, with parameters estimated in a Bayesian framework. Imaging analysis was conducted using SPM12 (Wellcome Department of Imaging Neuroscience, London, United Kingdom) in the Matlab (R2015b) environment. Pre-processing comprised slice timing correction, realignment, normalization to MNI space according to the T1-weighted image, and smoothing with an 8 mm Gaussian kernel.
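As a rough illustration of the behavioural model, a minimal standard Q-learner with a softmax choice rule on a two-option reversal task can be sketched as follows. The learning rate `alpha`, inverse temperature `beta`, reward probabilities and reversal trial are illustrative placeholders, not the values estimated in the study:

```python
import math
import random

def simulate_q_learning(p_reward=(0.8, 0.2), alpha=0.3, beta=5.0,
                        n_trials=100, switch_at=50, seed=0):
    """Standard two-option Q-learner on a probabilistic reversal task:
    the reward contingencies swap at trial `switch_at`."""
    rng = random.Random(seed)
    q = [0.0, 0.0]
    choices = []
    for t in range(n_trials):
        p = p_reward if t < switch_at else p_reward[::-1]
        # Softmax choice rule with inverse temperature beta.
        w0 = math.exp(beta * q[0])
        a = 0 if rng.random() < w0 / (w0 + math.exp(beta * q[1])) else 1
        r = 1.0 if rng.random() < p[a] else 0.0
        q[a] += alpha * (r - q[a])   # delta-rule update on the prediction error
        choices.append(a)
    return q, choices

q, choices = simulate_q_learning()
```

In the actual analysis the free parameters were estimated per participant in a Bayesian framework rather than fixed by hand as above.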

The frontostriatal circuit, including the putamen and medial orbitofrontal cortex (mOFC), was significantly more active in response to receiving reward and avoiding punishment than to receiving an aversive outcome and missing reward (p < 0.001 with FWE correction at cluster level), while the right insula showed greater activation in response to missing rewards and receiving punishment. Compared to healthy participants, GD patients showed significantly lower activation in the left superior frontal cortex and posterior cingulum at p < 0.001 for gain omission.

The reward prediction error (PE) signal correlated positively with activation in several clusters spanning cortical and subcortical regions, including the striatum, cingulate, bilateral insula, thalamus and superior frontal cortex, at p < 0.001 with FWE correction at cluster level. GD patients showed a trend towards a decreased reward PE response in the right precentral gyrus extending to the left posterior cingulate compared to controls (p < 0.05, FWE-corrected). The aversive PE signal correlated negatively with brain activity in regions including the bilateral thalamus, hippocampus, insula and striatum (p < 0.001, FWE-corrected). Compared with the control group, the GD group showed increased aversive PE activation in a cluster encompassing the right thalamus and right hippocampus, and in the right middle frontal gyrus extending to the right anterior cingulum (p < 0.005, FWE-corrected).

Through the reversal learning task, the study provides further support for dissociable brain circuits underlying distinct phases of reward and avoidance learning, and shows that OCD and GD are characterised by aberrant patterns of reward and avoidance processing.

F2 Using evolutionary algorithms to explore single-cell heterogeneity and microcircuit operation in the hippocampus

Andrea Navas-Olive, Liset M de la Prida

Cajal Institute, Madrid, Spain

Correspondence: Andrea Navas-Olive (acnavasolive@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):F2

The hippocampus-entorhinal system is critical for learning and memory. Recent cutting-edge single-cell technologies, from RNAseq to electrophysiology, are revealing previously unrecognized heterogeneity within the major cell types [1]. Surprisingly, massive high-throughput recordings of these very same cells identify low-dimensional microcircuit dynamics [2,3]. Reconciling both views is critical to understand how the brain operates.

The CA1 region is considered high in the hierarchy of the entorhinal-hippocampal system. Although traditionally viewed as a single-layered structure, recent evidence has disclosed an exquisite laminar organization across deep and superficial pyramidal sublayers at the transcriptional, morphological and functional levels [1,4,5]. Such low-dimensional segregation may be driven by a combination of intrinsic, biophysical and microcircuit factors, but the mechanisms are unknown.

Here, we exploit evolutionary algorithms to address the effect of single-cell heterogeneity on CA1 pyramidal cell activity [6]. First, we developed a biophysically realistic model of CA1 pyramidal cells using the Hodgkin-Huxley multi-compartment formalism in the Neuron+Python platform and the morphological database Neuromorpho.org. We adopted genetic algorithms (GA) to identify passive, active and synaptic conductances resulting in realistic electrophysiological behavior. We then used the generated models to explore the functional effect of intrinsic, synaptic and morphological heterogeneity during oscillatory activities. By combining results from all simulations in a logistic regression model we evaluated the effect of up/down-regulation of different factors. We found that multidimensional excitatory and inhibitory inputs interact with morphological and intrinsic factors to determine a low dimensional subset of output features (e.g. phase-locking preference) that matches non-fitted experimental data (Fig. 1).
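The genetic-algorithm search loop used for parameter fitting can be sketched in miniature as follows. Here a toy quadratic cost stands in for the electrophysiological feature error that would actually be computed from Neuron simulations, and the population size, mutation scale and bounds are all invented for illustration:

```python
import random

def evolve(fitness, n_params=3, pop_size=30, n_gen=50, mut_sigma=0.1, seed=1):
    """Minimal genetic algorithm: truncation selection plus Gaussian
    mutation over parameter vectors bounded to [0, 1]."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness)                  # lower cost is better
        parents = pop[:pop_size // 2]          # truncation selection (elitist)
        children = [[min(1.0, max(0.0, x + rng.gauss(0.0, mut_sigma)))
                     for x in p] for p in parents]
        pop = parents + children
    return min(pop, key=fitness)

# Toy stand-in for the electrophysiological cost: distance of a
# (hypothetical) normalized conductance vector from a target behaviour.
target = [0.2, 0.7, 0.5]
cost = lambda g: sum((a - b) ** 2 for a, b in zip(g, target))
best = evolve(cost)
```

In the study the fitness would instead score simulated membrane dynamics against recorded electrophysiological features, one Neuron simulation per candidate parameter set.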

Fig. 1

Conceptualization of mechanisms operating over intrinsic and synaptic factors to restrict neuronal firing of CA1 pyramidal cells during theta oscillations. Non-linear dendritic integration of multidimensional excitatory and inhibitory inputs results in segregated firing

Acknowledgments: Andrea Navas-Olive is supported by PhD Fellowship FPU17/03268.

References

  1. Cembrowski MS, Spruston N. Heterogeneity within classical cell types is the rule: lessons from hippocampal pyramidal neurons. Nature Reviews Neuroscience. 2019; 20(4): 193-204.

  2. Chaudhuri R, Gerçek B, Pandey B, Peyrache A, Fiete I. The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep. Nature Neuroscience. 2019; 22(9): 1512-1520.

  3. Guo W, Zhang JJ, Newman JP, Wilson MA. Latent learning drives sleep-dependent plasticity in distinct CA1 subpopulations. bioRxiv. 2020; Preprint at: https://doi.org/10.1101/2020.02.27.967794

  4. Bannister NJ, Larkman AU. Dendritic morphology of CA1 pyramidal neurones from the rat hippocampus: I. Branching patterns. Journal of Comparative Neurology. 1995; 360: 150–160.

  5. Valero M, et al. Determinants of different deep and superficial CA1 pyramidal cell dynamics during sharp-wave ripples. Nature Neuroscience. 2015; 18(9): 1281-1290.

  6. Navas-Olive A, et al. Multimodal determinants of phase-locked dynamics across deep-superficial hippocampal sublayers during theta oscillations. Nature Communications. 2020; 11(1): 1-4.

F3 Neuronal morphology imposes a tradeoff between stability, accuracy and efficiency of synaptic scaling

Adriano Bellotti, Saeed Aljaberi, Fulvio Forni, Timothy O’Leary

University of Cambridge, Department of Engineering, Cambridge, United Kingdom

Correspondence: Adriano Bellotti (adriano.bellotti@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):F3

Synaptic scaling is a homeostatic normalization mechanism that preserves relative synaptic strengths by adjusting them with a common factor. This multiplicative change is believed to be critical, since synaptic strengths are involved in learning and memory retention. Further, this homeostatic process is thought to be crucial for neuronal stability, playing a stabilizing role in otherwise runaway Hebbian plasticity [1-3]. Synaptic scaling requires a mechanism to sense total neuron activity and globally adjust synapses to achieve some activity set-point [4]. This process is relatively slow, which places limits on its ability to stabilize network activity [5]. Here we show that this slow response is inevitable in realistic neuronal morphologies. Furthermore, we reveal that global scaling can in fact be a source of instability unless responsiveness or scaling accuracy are sacrificed.

A neuron with tens of thousands of synapses must regulate its own excitability to compensate for changes in input. The time requirement for global feedback can introduce critical phase lags in a neuron’s response to perturbation. The severity of phase lag increases with neuron size. Further, a more expansive morphology worsens cell responsiveness and scaling accuracy, especially in distal regions of the neuron. Local pools of reserve receptors improve efficiency, potentiation, and scaling, but this comes at a cost. Trafficking large quantities of receptors requires time, exacerbating the phase lag and instability. Local homeostatic feedback mitigates instability, but this too comes at the cost of reducing scaling accuracy.

Realization of the phase lag instability requires a unified model of synaptic scaling, regulation, and transport. We present such a model with global and local feedback in realistic neuron morphologies (Fig. 1). This combined model shows that neurons face a tradeoff between stability, accuracy, and efficiency. Global feedback is required for synaptic scaling but favors either system stability or efficiency. Large receptor pools improve scaling accuracy in large morphologies but worsen both stability and efficiency. Local feedback improves the stability-efficiency tradeoff at the cost of scaling accuracy. This project introduces unexplored constraints on neuron size, morphology, and synaptic scaling that are weakened by an interplay between global and local feedback.
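The core stability argument can be caricatured in a few lines: a multiplicative scaling rule driven by a delayed global activity readout settles to its set-point when sensing is instantaneous, but the same gain overshoots and oscillates once the delay grows. This is a deliberately simplified discrete-time sketch, not the authors' unified transport model; the gain, delay, clamp and initial perturbation are arbitrary:

```python
def simulate_scaling(delay, gain=0.5, target=1.0, n_steps=120):
    """Discrete-time caricature of global synaptic scaling: total weight w
    is multiplicatively nudged toward an activity set-point, but the
    controller only sees the activity `delay` steps late."""
    w = 2.0                                   # start perturbed above set-point
    history = [w] * (delay + 1)               # buffer of past readouts
    trace = []
    for t in range(n_steps):
        sensed = history[t]                   # stale global readout
        factor = 1.0 + gain * (target - sensed)
        w *= max(0.1, factor)                 # clamp keeps the weight positive
        history.append(w)
        trace.append(w)
    return trace

stable = simulate_scaling(delay=0)   # immediate feedback: converges
laggy = simulate_scaling(delay=6)    # delayed feedback: large oscillations
```

With zero delay the weight lands on the set-point and stays there; with a six-step delay the same controller repeatedly overshoots, the phase-lag instability the abstract describes.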

Fig. 1

Schematic representation of unified model of AMPA receptor transport, potentiation, and scaling with global and local feedback

Acknowledgements: The authors are supported by European Research Council Grant FLEXNEURO (716643), Abu Dhabi National Oil Company, NIH OxCam Scholars program and Gates Cambridge Trust.

References

  1. Royer S, Paré D. Conservation of total synaptic weight through balanced synaptic depression and potentiation. Nature. 2003; 422(6931): 518-522.

  2. Chen JY, et al. Heterosynaptic plasticity prevents runaway synaptic dynamics. Journal of Neuroscience. 2013; 33(40): 15915-15929.

  3. Chistiakova M, Bannon NM, Chen JY, Bazhenov M, Volgushev M. Homeostatic role of heterosynaptic plasticity: models and experiments. Frontiers in Computational Neuroscience. 2015; 9: 89.

  4. Turrigiano GG. The self-tuning neuron: synaptic scaling of excitatory synapses. Cell. 2008; 135(3): 422-435.

  5. Zenke F, Hennequin G, Gerstner W. Synaptic plasticity in neural networks needs homeostasis with a fast rate detector. PLoS Computational Biology. 2013; 9(11).

F4 Who can turn faster? Comparison of the head direction circuit of two species

Ioannis Pisokas1, Stanley Heinze2, Barbara Webb1

1University of Edinburgh, School of Informatics, Edinburgh, United Kingdom; 2Lund University, Department of Biology, Lund, Sweden

Correspondence: Ioannis Pisokas (i.pisokas@sms.ed.ac.uk)

BMC Neuroscience 2020, 21(Suppl 1):F4

Ants, bees and other insects have the ability to return to their nest or hive using a navigation strategy known as path integration. Similarly, fruit flies employ path integration to return to a previously visited food source. An important component of path integration is the ability of the insect to keep track of its heading relative to salient visual cues. A highly conserved brain region known as the central complex has been identified as being of key importance for the computations required for an insect to keep track of its heading [1,2]. However, the similarities and differences of the underlying heading-tracking circuit between species are not well understood. We sought to address this shortcoming by using reverse-engineering techniques to derive the effective underlying neuronal circuits of two evolutionarily distant species, the fruit fly and the locust. Our analysis revealed that, regardless of the anatomical differences between the two species, the essential circuit structure has not changed. Both effective neuronal circuits have the structural topology of a ring attractor with an eight-fold radial structure (Fig. 1). However, despite the strong similarities between the two ring attractors, there remain differences. Using computational modelling we found that two apparently small anatomical differences have significant functional effects on the ability of the two circuits to track fast rotational movements and to maintain a stable heading signal [6]. In particular, the fruit fly circuit responds faster to abrupt heading changes of the animal, while the locust circuit maintains a heading signal that is more robust to inhomogeneities in cell membrane properties and synaptic weights. We suggest that the effects of these differences are consistent with the behavioural ecology of the two species. On the one hand, the faster response of the ring attractor circuit in the fruit fly accommodates the fast body saccades that fruit flies are known to perform. On the other hand, the locust is a migratory species, so its behaviour demands maintenance of a defined heading for a long period of time. Our results highlight that even seemingly small differences in the distribution of dendritic fibres can have a significant effect on the dynamics of the effective ring attractor circuit, with consequences for the behavioural capabilities of each species. These differences, emerging from morphologically distinct single neurons, highlight the importance of a comparative approach to neuroscience.
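The shared circuit topology can be illustrated with a textbook rate-based ring attractor with an eight-fold structure: cosine-profiled connectivity (local excitation, broad inhibition) sustains an activity bump after a transient heading cue is removed. The connectivity strengths, saturating rectifier and cue below are generic modelling choices, not fits to either species:

```python
import math

def ring_attractor(n=8, steps=300, dt=0.1):
    """Rate-based ring attractor: cosine connectivity plus a saturating
    rectifier. A transient cue at unit 0 leaves a self-sustained bump."""
    # w[i][j]: local excitation minus uniform inhibition around the ring.
    w = [[1.2 * math.cos(2 * math.pi * (i - j) / n) - 0.4
          for j in range(n)] for i in range(n)]
    r = [0.0] * n
    for t in range(steps):
        cue = 1.0 if t < 50 else 0.0          # heading cue, then removed
        drive = [sum(w[i][j] * r[j] for j in range(n))
                 + (cue if i == 0 else 0.0) for i in range(n)]
        # Leaky rate dynamics with a [0, 1]-saturating rectifier.
        r = [r[i] + dt * (-r[i] + min(1.0, max(0.0, drive[i])))
             for i in range(n)]
    return r

r = ring_attractor()
```

After the cue is gone, the bump persists at unit 0 and its neighbours while units on the far side of the ring stay silent, the defining property of a heading-holding ring attractor.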

Fig. 1

A-F Schematics and example neuron anatomy of the fruit fly and the desert locust (Reproduced with permission from [3,4,5]). G, H Effective circuit in the fruit fly and the locust, respectively. I Maximum rate of heading change sustained by each model for different magnitudes of heading change. J Ring attractor stability as function of noise in the cell membrane parameters

References

  1. Neuser K, Triphan T, Mronz M, Poeck B, Strauss R. Analysis of a spatial orientation memory in Drosophila. Nature. 2008; 453(7199): 1244–1247.

  2. Pfeiffer K, Homberg U. Organization and functional roles of the central complex in the insect brain. Annual Review of Entomology. 2013; 59(1): 165–184.

  3. Wolff T, Rubin GM. Neuroarchitecture of the Drosophila central complex: a catalog of nodulus and asymmetrical body neurons and a revision of the protocerebral bridge catalog. Journal of Comparative Neurology. 2018; 526(16): 2585–2611.

  4. Vitzthum H, Homberg U. Immunocytochemical demonstration of locustatachykinin-related peptides in the central complex of the locust brain. Journal of Comparative Neurology. 1998; 390(4): 455–469.

  5. Beetz MJ, el Jundi B, Heinze S, Homberg U. Topographic organization and possible function of the posterior optic tubercles in the brain of the desert locust Schistocerca gregaria. Journal of Comparative Neurology. 2015; 523(11): 1589–1607.

  6. Pisokas I, Heinze S, Webb B. The head direction circuit of two insect species. bioRxiv. 2019.

O1 Dopamine role in learning and action inference

Rafal Bogacz

University of Oxford, MRC Brain Network Dynamics Unit, Oxford, United Kingdom

Correspondence: Rafal Bogacz (rafal.bogacz@ndcn.ox.ac.uk)

BMC Neuroscience 2020, 21(Suppl 1):O1

Much evidence suggests that some dopaminergic neurons respond to unexpected rewards, and computational models have suggested that these neurons encode a reward prediction error, which drives learning about rewards. However, these models do not explain the recently observed diversity of dopaminergic responses, nor the role of dopamine in action planning, evident from movement difficulties in Parkinson’s disease. The presented work aims at extending existing models to account for these data. It proposes that a more complete description of dopaminergic activity can be achieved by combining reinforcement learning with elements of other recently proposed theories, including active inference.

The presented model describes how the basal ganglia network infers the actions required to obtain reward using Bayesian inference. The model assumes that the likelihood of reward given an action is encoded by the goal-directed system, while the prior probability of making a particular action in a given context is provided by the habit system. It is shown how the inference of the optimal action can be achieved through minimization of free energy, and how this inference can be implemented in a network with an architecture bearing a striking resemblance to the known anatomy of the striato-dopaminergic circuit. In particular, this network includes nodes encoding prediction errors, which are connected with other nodes in the network in a way resembling the “ascending spiral” structure of dopaminergic connections.

In the proposed model, dopaminergic neurons projecting to different parts of the striatum encode errors in predictions made by the corresponding systems within the basal ganglia. These prediction errors are equal to differences between rewards and expectations in the goal-directed system, and to differences between the chosen and habitual actions in the habit system. The prediction errors enable learning about the rewards resulting from actions, and habit formation. During action planning, the expectation of reward in the goal-directed system arises from formulating a plan to obtain that reward. Thus, dopaminergic neurons in this system provide feedback on whether the current motor plan is sufficient to obtain the available reward, and they facilitate action planning until a suitable plan is found. The presented models account for dopaminergic responses during movement and the effects of dopamine depletion on behaviour, and they make several experimental predictions.
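The two prediction-error signals described above admit a very compact sketch: the goal-directed system updates a reward expectation from a reward prediction error, while the habit system nudges action propensities toward the chosen action. The learning rates and two-action setup are illustrative assumptions, not parameters of the presented model:

```python
def dual_system_update(q, h, action, reward, alpha_q=0.2, alpha_h=0.1):
    """One learning step for the two systems: `q` holds reward
    expectations per action (goal-directed system), `h` holds habitual
    action propensities; each is driven by its own prediction error."""
    rpe = reward - q[action]                  # reward prediction error
    q[action] += alpha_q * rpe
    for a in range(len(h)):
        chosen = 1.0 if a == action else 0.0
        h[a] += alpha_h * (chosen - h[a])     # action prediction error
    return rpe

q, h = [0.0, 0.0], [0.5, 0.5]
rpe = dual_system_update(q, h, action=0, reward=1.0)
```

An unexpected reward yields a positive reward prediction error that increases the reward expectation for the chosen action, while repeated choice of that action gradually shifts the habitual prior toward it.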

O2 Neural manifold models for characterising brain circuit dynamics in neurodegenerative disease

Seigfred Prado1, Simon R. Schultz2, Mary A. Go1

1Imperial College London, Department of Bioengineering, London, United Kingdom; 2Imperial College London, London, United Kingdom

Correspondence: Seigfred Prado (s.prado17@imperial.ac.uk)

BMC Neuroscience 2020, 21(Suppl 1):O2

Although much is known about the neural circuits and molecular pathways required for normal hippocampal function, the processes by which neurodegenerative diseases, such as Alzheimer’s Disease (AD), disable the functioning of the hippocampus and connected structures remain to be determined. To make substantial advances in the treatment of such diseases, we must improve our understanding of how neural circuits process information and how they are disrupted during disease progression. Recent advances in optical imaging technologies that allow simultaneous recording of large populations of neurons in deeper structures [1] have shown great promise for revealing circuit dynamics during memory tasks [2]. However, to date, no study has revealed how large numbers of neurons in hippocampal-cortical circuits act together to encode, store and retrieve memories in animal models of AD. In this work, we explored the use of neural manifold analysis techniques to characterise brain circuit dynamics in neurodegenerative disease. To understand more precisely the basis of memory and cognitive impairments in AD, we extracted the underlying neural manifolds from large-scale neural responses of hippocampal circuits involved in spatial cognition of behaving mice. For validation, we simulated a model that generates data mimicking the neural activity of hippocampal cells in mouse models running on a linear circular track, while taking into account the effects of amyloid-beta plaques on circuit dynamics [3]. We compared our model with real data obtained by multiphoton imaging of hippocampal CA1 cells in mice engaged in a spatial memory task. We used recurrence analysis to show how neural manifolds evolve over time during memory encoding, storage and recall in a repetitive memory task. This work will help with understanding how amyloid-beta proteins affect the neural manifolds for spatial memory, which is particularly disturbed in AD.
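The recurrence-analysis step can be illustrated with a toy trajectory on a ring, mimicking repeated traversals of a circular track: revisits of the same region of state space show up as diagonal bands in the recurrence plot at the lap period. The trajectory and the threshold `eps` are arbitrary stand-ins for the dimensionality-reduced neural data:

```python
import math

def recurrence_matrix(traj, eps):
    """Binary recurrence plot: R[i][j] = 1 when the states at times i
    and j lie within eps of each other in the (reduced) state space."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    n = len(traj)
    return [[1 if dist(traj[i], traj[j]) < eps else 0 for j in range(n)]
            for i in range(n)]

# Toy 2D trajectory: three laps of a circular track, 20 samples per lap.
traj = [(math.cos(2 * math.pi * t / 20), math.sin(2 * math.pi * t / 20))
        for t in range(60)]
R = recurrence_matrix(traj, eps=0.3)
```

Points one full lap apart (e.g. times 0 and 20) recur, while points on opposite sides of the track (times 0 and 10) do not, producing the banded structure that recurrence plots of repetitive spatial behaviour exhibit.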

Fig. 1

Analyses for a mouse during a 4-minute recording session. a Cell activity and mouse position along the linear circular track. b Place map showing neuronal tuning curves. c Cumulative fraction of variance explained by manifolds of increasing dimensionality. d-f The distribution of data points in the reduced dimensional space using MDS, PCA and LEM, respectively. g-i Recurrence plots of d-f

References

  1. Schultz SR, Copeland CS, Foust AJ, Quicke P, Schuck R. Advances in two-photon scanning and scanless microscopy technologies for functional neural circuit imaging. Proceedings of the IEEE. 2016; 105(1): 139-157.

  2. Low RJ, Lewallen S, Aronov D, Nevers R, Tank DW. Probing variability in a cognitive map using manifold inference from neural dynamics. bioRxiv. 2018; 418939.

  3. Busche MA, et al. Clusters of hyperactive neurons near amyloid plaques in a mouse model of Alzheimer’s disease. Science. 2008; 321(5896): 1686-1689.

O3 Coupled experimental and modeling representation of the mechanisms of epileptic discharges in rat brain slices

Anton Chizhov1, Dmitry Amakhin2, Elena Smirnova2, Aleksey Zaitsev2

1Ioffe Institute, Saint-Petersburg, Russia; 2Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences, Laboratory of Molecular Mechanisms of Neural Interactions, Saint Petersburg, Russia

Correspondence: Anton Chizhov (anton.chizhov@mail.ioffe.ru)

BMC Neuroscience 2020, 21(Suppl 1):O3

Epileptic seizures and interictal discharges (IIDs) are determined by neuronal interactions and ionic dynamics, and thus help to reveal valuable knowledge about the mechanisms of brain functioning in not only the pathological but also the normal state. As synchronized pathological discharges are much simpler to study than normal functioning, we were able to accomplish their description with a set of electrophysiological evidence constrained by a biophysical mathematical model. In combined hippocampal-entorhinal cortex slices of rat in a high-potassium, low-magnesium, 4-AP-containing solution, we evaluated separate AMPA, NMDA and GABA-A conductances for different types of IIDs, using an original experimental technique [1]. The conductances have shown that the first type of discharge (IID1) is determined by the activity of only GABA-A channels, due to their pathologically depolarized reversal potential. The second type (IID2) is determined by an early GABA-A component followed by AMPA and NMDA components. The third type comprises purely glutamatergic discharges observed in the case of disinhibition. Our mathematical model of interacting neuronal populations reproduces the recorded synaptic currents and conductances for IIDs of the three types [2,3], confirming the major role of interneuron synchronization for IID1 and IID2, and revealing that the duration of IIDs is determined mainly by synaptic depression. IIDs occur spontaneously and propagate as waves with a speed of a few tens of mm/s [4]. IDs are clusters of IID-like discharges and are determined by the ionic dynamics [5]. To reveal only the major processes, main ions and variables, we formulated a reduced mathematical model, “Epileptor-2”, which is a minimal model that reproduces both IDs and IIDs [6] (Fig. 1). It shows that IIDs are spontaneous bursts governed by the membrane depolarization and synaptic resource, whereas IDs represent bursts of bursts. The Na/K-ATPase plays an important role: potassium accumulation governs the onset of each ID; sodium accumulates during the ID and activates the sodium-potassium pump, which terminates the ID by restoring the potassium gradient and thus repolarizing the neurons. A spatially distributed version of the Epileptor-2 model reveals that it is not extracellular potassium diffusion but synaptic connectivity that determines the speed of the ictal wavefront [7], which is consistent with our optogenetic experiments. The revealed factors are potential targets for antiepileptic medical treatment.
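The slow ionic loop described above can be caricatured in a few lines of code: firing raises extracellular K+, which sustains firing; intracellular Na+ loads up during the discharge and drives the Na/K pump, which lowers K+ and terminates the event, after which K+ slowly reaccumulates. This is a qualitative sketch in arbitrary units, not the published Epileptor-2 equations, and all rate constants are invented for illustration:

```python
def simulate_bursts(n_steps=20000, dt=0.01):
    """Qualitative caricature (arbitrary units) of the K+/Na+/pump loop:
    slow K+ accumulation triggers a discharge; Na+ buildup activates the
    Na/K pump, which restores K+ and ends the event."""
    k, na = 0.5, 0.0          # [K+]o and [Na+]i, arbitrary units
    events, firing_prev = 0, 0.0
    trace = []
    for _ in range(n_steps):
        firing = 1.0 if k > 1.0 else 0.0       # excitability threshold on K+
        pump = 0.8 * na                        # Na/K-ATPase activity
        k = max(0.0, k + dt * (0.2 + 2.0 * firing - pump))
        na += dt * (3.0 * firing - na)         # Na+ loads during firing
        if firing > firing_prev:
            events += 1                        # count discharge onsets
        firing_prev = firing
        trace.append(k)
    return events, trace

events, trace = simulate_bursts()
```

Run long enough, the sketch produces recurrent discharge-like events separated by pump-dominated recovery periods, the burst-termination logic the abstract attributes to the Na/K-ATPase.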

Fig. 1

Minimal model Epileptor-2 reproduces ictal events observed in slices. Intracellular sodium and extracellular potassium concentrations and membrane potential of a representative neuron are shown

Acknowledgments: This work was supported by the Russian Science Foundation (project 16-15-10201).

References

  1. Amakhin DV, Ergina JL, Chizhov AV, Zaitsev AV. Synaptic conductances during interictal discharges in pyramidal neurons of rat entorhinal cortex. Frontiers in Cellular Neuroscience. 2016; 10: 233.

  2. Chizhov A, Amakhin D, Zaitsev A. Computational model of interictal discharges triggered by interneurons. PLoS ONE. 2017; 12(10): e0185752.

  3. Chizhov AV, Amakhin DV, Zaitsev AV, Magazanik LG. AMPAR-mediated interictal discharges in neurons of entorhinal cortex: experiment and model. Doklady Biological Sciences. 2018; 479(1): 47-50.

  4. Chizhov AV, Amakhin DV, Zaitsev AV. Spatial propagation of interictal discharges along the cortex. Biochemical and Biophysical Research Communications. 2019; 508(4): 1245-1251.

  5. Chizhov AV, Amakhin DV, Zaitsev AV. Mathematical model of Na-K-Cl homeostasis in ictal and interictal discharges. PLoS ONE. 2019; 14(3): e0213904.

  6. Chizhov AV, Zefirov AV, Amakhin DV, Smirnova EY, Zaitsev AV. Minimal model of interictal and ictal discharges “Epileptor-2”. PLoS Computational Biology. 2018; 14(5): e1006186.

  7. Chizhov AV, Sanin AE. A simple model of epileptic seizure propagation: Potassium diffusion versus axo-dendritic spread. PLoS ONE. 2020; 15(4): e0230787.

O4 Towards multipurpose bio-realistic models of cortical circuits

Anton Arkhipov, Yazan N Billeh, Binghuang Cai, Sergey L Gratiy, Kael Dai, Ramakrishnan Iyer, Nathan W Gouwens, Reza Abbasi-Asl, Xiaoxuan Jia, Joshua H Siegle, Shawn R Olsen, Stefan Mihalas, Christof Koch

Allen Institute for Brain Science, Seattle, Washington State, United States of America

Correspondence: Anton Arkhipov (antona@alleninstitute.org)

BMC Neuroscience 2020, 21(Suppl 1): O4

One of the central questions in neuroscience is how structure of brain circuits determines their activity and function. To explore such structure-function relations systematically, we integrate information from large-scale experimental surveys into data-driven, bio-realistic models of brain circuits, with the current focus on the mouse cortex.

Our 230,000-neuron models of the mouse cortical area V1 [1] were constructed at two levels of granularity, using either biophysically detailed neurons or point neurons. These models systematically integrated a broad array of experimental data [1-3]: information about the distribution and morpho-electric properties of the different neuron types in V1; connection probabilities, synaptic weights, axonal delays, and dendritic targeting rules inferred from a thorough survey of the literature; and a sophisticated representation of visual input to V1 from the Lateral Geniculate Nucleus, fit to in vivo recordings. The model activity has been tested against large-scale in vivo recordings of neural activity [4]. We found good agreement between these experimental data and the V1 models for a variety of metrics, such as direction selectivity, and poorer agreement for other metrics, suggesting avenues for future improvement. In the process of building and testing the models, we also made predictions about the logic of recurrent connectivity with respect to the functional properties of neurons, some of which have been verified experimentally [1].
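The “like-to-like” logic of recurrent connectivity referenced above can be caricatured as a wiring rule in which connection probability falls off with somatic distance and rises with similarity of preferred orientation. The function `connect_prob` and the parameter values (`p0`, `sigma`, `kappa`) below are hypothetical illustrations, not the rules or values used in the published models.

```python
import numpy as np

def connect_prob(d_um, d_theta_deg, p0=0.15, sigma=150.0, kappa=0.5):
    """Hypothetical 'like-to-like' wiring rule: probability decays with
    somatic distance (Gaussian, scale sigma in micrometres) and increases
    with orientation-preference similarity. p0, sigma, kappa illustrative."""
    like = 1.0 + kappa * np.cos(2.0 * np.deg2rad(d_theta_deg))
    return p0 * np.exp(-(d_um / sigma) ** 2) * like / (1.0 + kappa)

rng = np.random.default_rng(0)
n = 200
pos = rng.uniform(0.0, 400.0, size=(n, 2))       # somatic positions (um)
theta = rng.uniform(0.0, 180.0, size=n)          # preferred orientations (deg)
d = np.linalg.norm(pos[:, None] - pos[None], axis=-1)
dtheta = theta[:, None] - theta[None]
adj = rng.random((n, n)) < connect_prob(d, dtheta)
np.fill_diagonal(adj, False)                     # no autapses
```

Under such a rule, similarly tuned nearby neurons are preferentially coupled, which is the kind of structure-function prediction that can then be checked against connectivity data.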

In this presentation, we will focus on the model’s successes in quantitatively matching multiple experimental measures, as well as its failures to match others. Both successes and failures shed light on potential structure-function relations in cortical circuits, leading to experimentally testable hypotheses. Our models are shared freely with the community: https://portal.brain-map.org/explore/models/mv1-all-layers. We also freely share our software tools: the Brain Modeling ToolKit (BMTK; alleninstitute.github.io/bmtk/), a software suite for model building and simulation [5], and the SONATA file format [6] (github.com/allenInstitute/sonata).

References

  1. Billeh YN, et al. Systematic integration of structural and functional data into multi-scale models of mouse primary visual cortex. Neuron. 2020.

  2. Gouwens NW, et al. Classification of electrophysiological and morphological neuron types in the mouse visual cortex. Nature Neuroscience. 2019; 22(7): 1182-1195.

  3. Gouwens NW, et al. Systematic generation of biophysically detailed models for diverse cortical neuron types. Nature Communications. 2018; 9(1): 1-13.

  4. Siegle JH, et al. A survey of spiking activity reveals a functional hierarchy of mouse corticothalamic visual areas. bioRxiv. 2019; 805010 [preprint].

  5. Gratiy SL, et al. BioNet: A Python interface to NEURON for modelling large-scale networks. PLoS ONE. 2018; 13(8): e0201630.

  6. Dai K, et al. The SONATA data format for efficient description of large-scale network models. PLoS Computational Biology. 2020; 16(2): e1007696.

O5 How stimulus statistics affect the receptive fields of cells in primary visual cortex

Ali Almasi1, Hamish Meffin2, Shi Sun3, Michael R Ibbotson4

1National Vision Research Institute, Melbourne, Australia; 2University of Melbourne, Biomedical Engineering, Melbourne, Australia; 3University of Melbourne, Optometry and Vision Sciences, Melbourne, Australia; 4Australian College of Optometry, The National Vision Research Institute, Carlton, Australia

Correspondence: Ali Almasi (aalmasi@aco.org.au)

BMC Neuroscience 2020, 21(Suppl 1): O5

Our understanding of sensory coding in the visual system is largely derived from parametrizing neuronal responses to basic stimuli. Recently, mathematical tools have been developed that allow estimation of the parameters of a receptive field (RF) model, typically a cascade of linear filters applied to the stimulus, followed by static nonlinearities that map the filter outputs to neuronal spike rates. However, how much do these characterizations depend on the choice of stimulus type?

We studied the changes that neuronal RF models undergo when the statistics of the visual stimulus change. We applied the nonlinear input model (NIM) [1] to recordings of single units in cat primary visual cortex (V1) responding to white Gaussian noise (WGN) and natural scenes (NS). The two stimulus types were matched in global RMS contrast, but they differ fundamentally in second- and higher-order statistics, which are abundant in natural scenes and absent from white noise. For each cell we estimated the spatial filters constituting the neuronal RF and their corresponding nonlinear pooling mechanism, while making minimal assumptions about the underlying neuronal processing.
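An NIM-style cascade of the kind fitted here (linear filters, rectifying upstream nonlinearities with excitatory or suppressive signs, and a spiking nonlinearity) can be sketched as follows. The function name and the specific choices of rectified-linear upstream nonlinearities and a softplus output are illustrative simplifications, not the fitted model.

```python
import numpy as np

def nim_rate(stim, filters, signs, alpha=1.0):
    """Spike rate of an NIM-style cascade (in the spirit of McFarland et al. [1]).

    Each filter output passes through a rectifying upstream nonlinearity,
    contributing with excitatory (+1) or suppressive (-1) sign, and the summed
    generator signal passes through a softplus spiking nonlinearity.
    Shapes: stim (T, D), filters (K, D). Parameters illustrative.
    """
    g = stim @ filters.T                         # (T, K) linear filter outputs
    subunits = np.maximum(g, 0.0)                # rectified-linear upstream NLs
    generator = subunits @ np.asarray(signs, dtype=float)
    return np.log1p(np.exp(alpha * generator))   # softplus spiking nonlinearity

rng = np.random.default_rng(0)
stim = rng.standard_normal((500, 64))            # e.g., flattened 8x8 WGN frames
filters = rng.standard_normal((3, 64)) / 8.0     # three placeholder RF filters
rate = nim_rate(stim, filters, signs=[1.0, 1.0, -1.0])
```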

We found that cells respond differently to the two stimulus types, with mostly higher spike rates and shorter response latencies to NS than to WGN. The most striking finding was that NS stimuli yielded roughly twice as many recovered RF filters as WGN stimuli. Careful analysis showed that this difference in the number of identified RF filters is not related to the higher spike rates evoked by NS stimuli. Instead, we found it to be attributable to differences in the contrast levels of specific features, which differ in prevalence between NS and WGN. These features correspond to the V1 RF filters recovered by the model. This feature contrast attains much higher values in NS than in WGN stimuli, and when it is controlled for, it explains the difference in the number of RF filters obtained. Our findings imply that a greater extent of nonlinear processing in V1 neurons can be uncovered using natural scene stimulation.

We also compared the identified RF filters under the two stimulation regimes in terms of their spatial characteristics. Population analysis of the RF filters revealed a statistically significant bias towards higher spatial frequency filters with narrower spatial frequency bandwidth under the NS stimulation regime (p-value < 0.0025).

Acknowledgements: The authors acknowledge the support of the Australian Research Council Centre of Excellence for Integrative Brain Function (CE140100007), the National Health and Medical Research Council (GNT1106390), and the Lions Club of Victoria.

References

  1. McFarland JM, Cui Y, Butts DA. Inferring nonlinear neuronal computation based on physiologically plausible inputs. PLoS Computational Biology. 2013; 9(7).

O6 Analysis and modelling of response features of accessory olfactory bulb neurons

Yoram Ben-Shaul1, Rohini Bansal1, Romana Stopkova2, Maximilian Nagel3, Pavel Stopka2, Marc Spehr3

1The Hebrew University, Medical Neurobiology, Jerusalem, Israel; 2Charles University Prague, Zoology, Prague, Czechia; 3RWTH Aachen University, Chemosensation, Aachen, Germany

Correspondence: Yoram Ben-Shaul (yoramb@ekmd.huji.ac.il)

BMC Neuroscience 2020, 21(Suppl 1):O6

The broad goal of this work is to understand how consistency on a macroscopic scale can be achieved despite random connectivity at the level of individual neurons.

A central aspect of any sensory system is the manner in which features of the external world are represented by neurons at various processing stages. Yet it is not always clear what these features are, how they are represented, and how they emerge mechanistically. Here, we investigate this issue in the context of the vomeronasal system (VNS), a vertebrate chemosensory system specialized for processing cues from other organisms. We focus on the accessory olfactory bulb (AOB), which receives all vomeronasal sensory neuron inputs. Unlike the main olfactory system, where mitral/tufted cells (MTCs) sample information from a single receptor type, AOB MTCs sample a variable number of glomeruli, in a manner that appears largely random. This apparently random connectivity is puzzling given the presumed role of this system in processing innately relevant cues.

We use multisite extracellular recordings to measure the responses of mouse AOB MTCs to controlled presentation of natural urine stimuli from male and female mice from various strains, including from wild mice. Crucially, we also measured the levels of both volatile and peptide chemical components in the very same stimulus samples that were presented to the mice. As subjects, we used two genetically distinct mouse strains, allowing us to test if macroscopic similarity can emerge despite variability at the level of receptor expression.

First, we explored neuronal receptive fields, and found that neurons selective for a specific strain (regardless of sex), or a specific sex (regardless of strain), are less common than expected by chance. This is consistent with our previous findings indicating that high-level stimulus features are represented in a distributed manner in the AOB. We then compared various aspects of neuronal responses across the two strains, and found a high degree of correlation among them, suggesting that despite apparent randomness and strain-specific genetic aspects, consistent features emerge at the level of the AOB.

Next, we set out to model the responses of AOB neurons. Briefly, AOB responses to a given stimulus are modelled as dot products of random tuning profiles to specific chemicals and the actual level of those chemicals in the stimulus. In this manner we derive a population of AOB responses, which we can then compare to the measured responses. Our analysis thus far reveals several important insights. First, neuronal response properties are best accounted for by sampling of protein/peptide components, but not by volatile urinary components. This is consistent with the known physiology of the VNS. Second, several response features (population level neuronal distances, sparseness, distribution of receptive field types) are best reproduced in the model with random sampling of multiple, rather than single molecules per neuron. This suggests that the sampling mode of AOB neurons may mitigate some of the consequences of random sampling. Finally, we note that random sampling of molecules provides a reasonable fit for some, but not all metrics of the observed responses. Our ongoing work aims to identify which changes must be made to our initial simplistic model in order to account for these features.
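The random-sampling model described above can be sketched as follows: each model neuron draws a sparse random tuning profile over molecules, and its response to a stimulus is the dot product of that profile with the stimulus’s molecular levels. The molecule counts, weights and rectification below are illustrative placeholders, not the fitted model or the assayed data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_mols, n_stim = 100, 40, 12

# Placeholder ligand (peptide) levels per urine stimulus, standing in for
# the measured molecular concentrations.
levels = rng.lognormal(sigma=1.0, size=(n_stim, n_mols))

def aob_responses(k):
    """Each model neuron randomly samples k molecules with random weights;
    its response to a stimulus is the dot product of that sparse tuning
    profile with the stimulus's molecular levels, rectified to a rate."""
    W = np.zeros((n_neurons, n_mols))
    for i in range(n_neurons):
        picked = rng.choice(n_mols, size=k, replace=False)
        W[i, picked] = rng.normal(size=k)
    return np.maximum(levels @ W.T, 0.0)      # (n_stim, n_neurons)

single = aob_responses(k=1)   # one molecule per neuron
multi = aob_responses(k=5)    # multi-molecule sampling per neuron
```

Population metrics (pairwise distances, sparseness, receptive-field types) computed on `single` versus `multi` can then be compared with the recorded responses, as in the analysis above.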

Acknowledgement: This work is funded by GIF and DFG grants to Marc Spehr and Yoram Ben-Shaul.

O7 ‘Awake delta’ and theta-rhythmic modes of hippocampal network activity track intermittent locomotor behaviors in rats

Nathan Schultheiss1, Tomas Guilarte2, Tim Allen1

1Florida International University, Psychology, Miami, United States of America; 2Florida International University, Robert Stempel College of Public Health and Social Work & Environmental Health Sciences, Miami, United States of America

Correspondence: Nathan Schultheiss (nschulth@fiu.edu)

BMC Neuroscience 2020, 21(Suppl 1):O7

Delta-frequency activity in the local field potential (LFP) is widely believed to correspond to so-called ‘cortical silence’ during phases of non-REM sleep, but delta in awake behaving animals is not well understood and is rarely studied in detail. By integrating novel analyses of the hippocampal (HC) LFP with simultaneous behavioral tracking, we show for the first time that HC synchronization in the delta frequency band (1-4 Hz) is related to animals’ locomotor behaviors during free exploration and foraging in an open field environment. In contrast to well-established relationships between animals’ running speeds and the theta rhythm (6-10 Hz), we found that delta was most prominent when animals were stationary or moving slowly (i.e. when theta and fast gamma (65-120 Hz) were weak). Furthermore, delta synchronization often developed rapidly when animals paused briefly between intermittent running bouts.

Next, we developed an innovative strategy for identifying putative modes of network function based on the spectral content of the LFP. By applying hierarchical clustering algorithms to time-windowed power spectra throughout behavioral sessions (i.e. the spectrogram), we categorized moment-by-moment estimations of the power spectral density (PSD) into spectral modes of HC activity. That is, we operationalized putative functional modes of network computation as spectral modes of LFP activity. Delta and theta power were strikingly orthogonal across the resultant spectral modes, suggesting the possibility that delta- and theta-dominated hippocampal activity patterns represent distinct modes of HC function during navigation. Delta and theta were also remarkably orthogonal across precisely-defined bouts of running and stationary behavior, indicating that the stops-and-starts that compose rats’ locomotor trajectories are accompanied by alternating delta- and theta-dominated HC states.
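The mode-identification strategy (moment-by-moment power spectra clustered hierarchically into spectral modes) can be sketched on a surrogate LFP with alternating delta- and theta-dominated epochs. The sampling rate, window length and two-cluster cut below are illustrative choices, not the parameters used in the study.

```python
import numpy as np
from scipy.signal import welch
from scipy.cluster.hierarchy import linkage, fcluster

fs = 250.0
rng = np.random.default_rng(2)

# Surrogate LFP: twenty 6 s epochs alternating between 8 Hz (theta-like)
# and 2 Hz (delta-like) oscillations plus noise.
t = np.arange(0.0, 6.0, 1.0 / fs)
epochs = [np.sin(2 * np.pi * (8.0 if k % 2 == 0 else 2.0) * t)
          + 0.3 * rng.standard_normal(t.size) for k in range(20)]
lfp = np.concatenate(epochs)

# Moment-by-moment PSD estimates on non-overlapping 2 s windows
win = int(2 * fs)
psds = np.array([welch(lfp[i:i + win], fs=fs, nperseg=win)[1]
                 for i in range(0, lfp.size - win + 1, win)])
logp = np.log(psds + 1e-12)

# Hierarchical clustering of windowed spectra -> putative spectral modes
Z = linkage(logp, method='ward')
modes = fcluster(Z, t=2, criterion='maxclust')
```

On real data, the resulting mode labels can then be aligned with behavioral state (running versus stationary bouts), as described above.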

We then asked whether the incidence of delta and theta modes was related to the coherence between recording sites in hippocampus or between hippocampus and medial prefrontal cortex (mPFC). We found that intrahippocampal coherences in both the delta-band and the theta-band were monotonically related to theta-delta ratios across modes. Furthermore, in two rats implanted with dual-site recording arrays, we found that theta coherence between HC and mPFC increased during running, and delta-band coherence between mPFC and HC increased during stationary bouts. Taken together, our findings suggest that delta-dominated network modes (and corresponding mPFC-HC couplings) represent functionally-distinct circuit dynamics that are temporally and behaviorally interspersed among theta-dominated modes during spatial navigation. As such, delta modes could play a fundamental role in coordinating mnemonic functions including encoding and retrieval mechanisms, or decision-making processes incorporating prospective or retrospective representations of experience, at a timescale that segments event sequences within behavioral episodes.

O8 Finite element simulation of ionic electrodiffusion in cellular geometries

Ada J Ellingsrud

Simula Research Laboratory, Scientific Computing and Numerical Analysis, Oslo, Norway

Correspondence: Ada J Ellingsrud (ada@simula.no)

BMC Neuroscience 2020, 21(Suppl 1):O8

Electrical conduction in brain tissue is commonly modeled using classical bidomain models. These models fundamentally assume that the discrete nature of brain tissue can be represented by homogenized equations in which the extracellular space, the cell membrane, and the intracellular space are continuous and exist everywhere. Consequently, they do not allow simulations highlighting the effect of a nonuniform distribution of ion channels along the cell membrane or of the complex morphology of the cells. In this talk, we present a more accurate framework for cerebral electrodiffusion with an explicit representation of the geometry of the cell, the cell membrane and the extracellular space. To take full advantage of this framework, a numerical solution scheme capable of efficiently handling three-dimensional, complicated geometries is required. We propose a novel numerical solution scheme using a mortar finite element method, coupling variational problems posed over the non-overlapping intra- and extracellular domains by weakly enforcing interface conditions on the cell membrane. This solution algorithm flexibly allows for arbitrary geometries and efficient solution of the separate subproblems. Finally, we study ephaptic coupling induced in an unmyelinated axon bundle and demonstrate how the presented framework can give new insights in this setting. Simulations of nine idealized, tightly packed axons show that inducing action potentials in one or more axons yields ephaptic currents that have a pronounced excitatory effect on neighboring axons, but fail to induce action potentials there [1].
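As a toy illustration of the physics being solved (not of the mortar finite element scheme itself), a one-dimensional explicit finite-difference step of the Nernst-Planck equation for a single ion species might look like this; the domain size, diffusion constant and time step are illustrative.

```python
import numpy as np

def nernst_planck_step(c, phi, D, z, dx, dt, psi=26.7e-3):
    """One explicit finite-difference step of the 1D Nernst-Planck equation,
    dc/dt = d/dx [ D (dc/dx + (z c / psi) dphi/dx) ],
    with sealed (zero-flux) boundaries; psi = RT/F in volts. A 1D caricature
    of the electrodiffusion that the finite element framework solves in 3D.
    """
    dcdx = np.gradient(c, dx)
    dphidx = np.gradient(phi, dx)
    flux = -D * (dcdx + z * c / psi * dphidx)   # diffusive + drift flux
    flux[0] = flux[-1] = 0.0                    # sealed ends
    return c - dt * np.gradient(flux, dx)

x = np.linspace(0.0, 100e-6, 101)               # 100 um domain, 1 um spacing
dx = x[1] - x[0]
c = 3.0 + 2.0 * np.exp(-((x - 50e-6) / 10e-6) ** 2)  # K+ bump (mM)
phi = np.zeros_like(x)                          # start with no electric field
for _ in range(50):                             # dt below the stability limit
    c = nernst_planck_step(c, phi, D=1.96e-9, z=1, dx=dx, dt=1e-4)
```

With a nonzero potential gradient, the same update transports the ion along the field, which is the coupling the full framework resolves across the membrane interface.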

Acknowledgements: This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme under grant agreement 714892 (Waterscales), and from the Research Council of Norway (BIOTEK2021 Digital Life project ‘DigiBrain’, project 248828).

References

  1. 1.

    Ellingsrud AJ, Solbrå A, Einevoll GT, Halnes G, Rognes ME. Finite element simulation of ionic electrodiffusion in cellular geometries. Frontiers in Neuroinformatics. 2020; 14: 11.

O9 Discovering synaptic mechanisms underlying the propagation of cortical activity: a model-driven experimental and data analysis approach

Heidi Teppola1, Jugoslava Acimovic1, Marja-Leena Linne2

1Tampere University, Faculty of Medicine and Health Technology, Tampere, Finland; 2Tampere University, Medicine and Health Sciences, Tampere, Finland

Correspondence: Heidi Teppola (heidi.teppola@tuni.fi)

BMC Neuroscience 2020, 21(Suppl 1):O9

Spontaneous, synchronized activity is a well-established feature of cortical networks in vitro and in vivo. The landmark of this activity is the repetitive emergence of bursts propagating across networks as spatio-temporal patterns. Cortical bursts are governed by excitatory and inhibitory synapses via AMPA, NMDA and GABA-A receptors. Although spontaneous activity is a well-known phenomenon in developing networks, its specific underlying mechanisms in health and disease are not fully understood. In order to study the synaptic mechanisms regulating the propagation of cortical activity, it is important to combine experimental wet-lab studies with in silico modeling and to build detailed, realistic computational models of cortical network activity. Moreover, experimental studies and analysis of microelectrode array (MEA) data are not typically designed to support computational modeling. We show here how the synaptic AMPA, NMDA and GABA-A receptors shape the initiation, propagation and termination of cortical burst activity in rodent networks in vitro and in silico, and develop a model-driven data analysis workflow to support the development of spiking and biophysical network models in silico [1].

We created a model-driven data analysis workflow with multiple steps to examine the contributions of synaptic receptors to burst dynamics in both in vitro and in silico neuronal networks (Fig. 1). First, cortical networks were prepared from the forebrains of postnatal rats and maintained on MEA plates. Second, network-wide activity was recorded with the MEA technique under several pharmacological conditions of receptor antagonists. Third, multivariate data analysis was conducted in a way that supports both the neurobiological questions and the fitting and validation of computational models to quantitatively reproduce the experimental results. Fourth, the computational models were simulated with different parameters to test putative mechanisms responsible for the network activity.
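The burst-analysis step of such a workflow can be sketched as a simple network-burst detector that thresholds the array-wide firing-rate histogram of pooled MEA spike times. The bin width and threshold fraction are illustrative, not the criteria used in the study.

```python
import numpy as np

def detect_network_bursts(spikes, t_max, bin_s=0.025, frac=0.2):
    """Return (start, stop) times of network bursts: contiguous runs of
    time bins whose pooled spike count exceeds a fraction of the peak.
    Parameters are illustrative placeholders."""
    edges = np.arange(0.0, t_max + bin_s, bin_s)
    rate, _ = np.histogram(spikes, bins=edges)
    above = rate >= frac * rate.max()
    bursts, start = [], None
    for i, a in enumerate(above):
        if a and start is None:
            start = edges[i]                 # burst onset
        elif not a and start is not None:
            bursts.append((start, edges[i])) # burst offset
            start = None
    if start is not None:
        bursts.append((start, edges[-1]))
    return bursts

rng = np.random.default_rng(3)
spikes = np.concatenate([
    rng.uniform(0.0, 5.0, 20),     # sparse background activity
    rng.uniform(1.0, 1.2, 200),    # network burst 1
    rng.uniform(3.0, 3.2, 200),    # network burst 2
])
bursts = detect_network_bursts(spikes, t_max=5.0)
```

Per-burst measures (duration, onset spike rates, propagation order across electrodes) computed on such detections are what the pharmacological conditions are then compared on.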

Fig. 1

Model-driven data analysis workflow for discovering synaptic mechanisms underlying the propagation of cortical activity in vitro and in silico

The experimental results obtained in this study show that AMPA receptors initiate bursts by rapidly recruiting cells, whereas NMDA receptors maintain them. GABA-A receptors suppress the frequency of AMPA receptor-mediated spiking at the onset of bursts and attenuate the NMDA receptor-mediated late phase. These findings highlight the importance of both excitatory and inhibitory synapses in activity propagation and demonstrate a specific interaction between AMPA and GABA-A receptors for fast excitation and inhibition. In the presence of this interaction, the spatio-temporal propagation patterns of activity are richer and more diverse than in its absence. Moreover, we emphasize the systematic, model-driven data analysis workflow used throughout the study, which enables comparison of results obtained from multiple in vitro networks and validation of data-driven model development in silico. A well-defined workflow can reduce the number of biological experiments, promote more reliable and efficient use of the MEA technique, and improve the reproducibility of research. It helps reveal in detail how excitatory and inhibitory synapses shape cortical activity propagation and dynamics in rodent networks in vitro and in silico.

References

  1. Teppola H, Aćimović J, Linne M-L. Unique features of network bursts emerge from the complex interplay of excitatory and inhibitory receptors in rat neocortical networks. Frontiers in Cellular Neuroscience. 2019; 13(377): 1-22.

O10 Neural flows: estimation of wave velocities and identification of singularities in 3D+t brain data

Paula Sanz-Leon1, Leonardo L Gollo2, James A Roberts3

1University of Sydney, QIMR Berghofer, Brisbane, Australia; 2QIMR Berghofer/ Monash University, Melbourne, Australia; 3QIMR Berghofer, Computational Biology, Brisbane, Australia

Correspondence: Paula Sanz-Leon (pmsl.academic@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):O10

Background: Neural activity organizes into constantly evolving spatiotemporal patterns, also known as brain waves [1]. Indeed, wave-like patterns have been observed across multiple neuroimaging modalities and multiple spatiotemporal scales [2-4]. However, owing to experimental constraints, most attention has thus far been given to localised wave dynamics on scales of micrometers to a few centimeters, rather than at the global, whole-brain scale. Existing toolboxes [2,4] are geared particularly towards 2D spatial domains (e.g., LFPs or VSDs on structured rectangular grids). No tool exists to study spatiotemporal waves naturally unfolding in 3D+t as recorded with non-invasive neuroimaging techniques (e.g., EEG, MEG, and fMRI). In this work, we present results obtained with our toolbox, neural flows (Fig. 1).

Fig. 1

The toolbox has five main capabilities, built on a core interpolation step. Core module: (1) estimation of flows; (2) detection of singularities. Analysis module: (3) classification and (4) quantification and tracking of singularities; (5) classification of large-scale patterns with modal decomposition

Methods and Results: Our toolbox handles irregularly sampled data, such as those produced by brain network modelling [5,6] or source-reconstructed M/EEG, and regularly sampled data, such as voxel-based fMRI. The toolbox performs the following steps: 1) Estimation of neural flows [5,7]. 2) Detection of 3D singularities (i.e., points of vanishing flow). 3) Classification of 3D singularities. In that regard, the key flow singularities detected to date have been sources and sinks (from which activity emerges and into which it vanishes, respectively), but no methods or tools existed to detect 3D saddles (around which activity is redirected to other parts of the brain). 4) Quantification of singularity statistics. 5) Finally, modal decomposition of neural flow dynamics. This decomposition allows for the detection and prediction of the most common spatiotemporal patterns of activity found in empirical data.
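Steps 2 and 3 can be illustrated on a synthetic 3D flow with a known critical point: detect the singularity as the point of vanishing flow speed, then classify it from the eigenvalues of the local Jacobian. This is a didactic sketch, not the toolbox’s implementation.

```python
import numpy as np

# Synthetic 3D flow with one critical point at the origin: v(r) = -r (a sink)
x = np.linspace(-1.0, 1.0, 21)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
U, V, W = -X, -Y, -Z

# Step 2: detect the singularity as the point of vanishing flow speed
speed = np.sqrt(U**2 + V**2 + W**2)
idx = np.unravel_index(np.argmin(speed), speed.shape)
singularity = (x[idx[0]], x[idx[1]], x[idx[2]])

# Step 3: classify it from the eigenvalues of the local Jacobian:
# all Re<0 -> sink, all Re>0 -> source, mixed signs -> saddle
h = x[1] - x[0]
grads = [np.gradient(F, h) for F in (U, V, W)]   # dF/dx, dF/dy, dF/dz per field
J = np.array([[grads[r][c][idx] for c in range(3)] for r in range(3)])
eigs = np.linalg.eigvals(J)
kind = ('sink' if np.all(eigs.real < 0)
        else 'source' if np.all(eigs.real > 0) else 'saddle')
```

For this field the Jacobian is the negative identity, so all eigenvalues have negative real part and the critical point is classified as a sink; a mix of signs would flag a 3D saddle.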

Conclusions: Representation of neural activity based on singularities (commonly known as critical points) is essentially a dimensionality reduction framework to understand large-scale brain dynamics. The distribution of singularities in physical space allows us to simplify the complex structure of flows into areas with similar dynamical behavior (e.g., fast versus slow, stagnant, laminar, or rotating). For modelling work, this compact representation allows for an intuitive and systematic understanding of the effects of various parameters in brain network dynamics such as spatial heterogeneity, lesions and noise. For experimental work, neural flows enable a rational understanding of large-scale brain dynamics directly in anatomical space which facilitates the interpretation and comparison of results across multiple modalities. Toolbox capabilities are presented in the accompanying figure. Watch this space for the open-source code: https://github.com/brain-modelling-group.

References

  1. Roberts JA, et al. Metastable brain waves. Nature Communications. 2019; 10(1): 1-17.

  2. Muller L, et al. Rotating waves during human sleep spindles organize global patterns of activity that repeat precisely through the night. eLife. 2016; 5: e17267.

  3. Contreras D, Destexhe A, Sejnowski TJ, Steriade M. Spatiotemporal patterns of spindle oscillations in cortex and thalamus. Journal of Neuroscience. 1997; 17: 1179-1196.

  4. Destexhe A, Contreras D, Steriade M. Spatiotemporal analysis of local field potentials and unit discharges in cat cerebral cortex during natural wake and sleep states. Journal of Neuroscience. 1999; 19(11): 4595-4608.

  5. Sanz-Leon P, et al. Neuroimage toolbox 2020 [in prep].

  6. Breakspear M. Dynamic models of large-scale brain activity. Nature Neuroscience. 2017; 20(3): 340-352.

  7. Townsend RG, Gong P. Detection and analysis of spatiotemporal patterns in brain activity. PLoS Computational Biology. 2018; 14(12): e1006643.

O11 Experimental and computational characterization of interval variability in the sequential activity of the Lymnaea feeding CPG

Alicia Garrido-Peña1, Irene Elices1, Rafael Levi1, Francisco B Rodriguez2, Pablo Varona2

1Universidad Autonoma Madrid, Grupo de Neurocomputacion Biologica, Madrid, Spain; 2Universidad Autónoma Madrid, Ingeniería Informática, Madrid, Spain

Correspondence: Alicia Garrido-Peña (alicia.garrido@uam.es)

BMC Neuroscience 2020, 21(Suppl 1):O11

Central pattern generators (CPGs) generate and coordinate motor movements by producing rhythms composed of patterned sequences of activation in their constituent neurons. These rhythms are robust yet flexible, and the time intervals that build the neural sequences can adapt as a function of the behavioral context. We recently revealed robust dynamical invariants in the form of cycle-by-cycle linear relationships between two specific intervals of the crustacean pyloric CPG sequence and the period [1]. Following the same strategy, the present work characterizes the intervals that build the rhythm and the associated sequence of the feeding CPG of the mollusk Lymnaea stagnalis. The study draws both on electrophysiological recordings of living neurons and on the rhythm produced by a realistic conductance-based model. The analysis reported here first quantifies the variability of the intervals and then characterizes the relationships between the intervals that build the sequence and the period, which allows the identification of dynamical invariants. To induce variability in the CPG model, we injected current ramps into individual CPG neurons, following the stimulation used in the experimental recordings of [2]. Our work extends previous analyses characterizing the Lymnaea feeding CPG rhythm from experimental recordings and modeling studies by considering all intervals that build the sequence [3]. We report distinct variability across the sequence time intervals and the existence of dynamical invariants that depend on the neuron being stimulated. The presence of dynamical invariants in CPG sequences, not only in the model but also in two animal species, points to the universality of this phenomenon.
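The invariant-detection step reduces, per candidate interval, to a cycle-by-cycle linear regression of the interval against the period: a strong linear relationship across cycles flags a dynamical invariant. The surrogate data below are illustrative, not Lymnaea measurements.

```python
import numpy as np

# Surrogate cycle-by-cycle data: one sequence interval covaries linearly
# with the period (the signature of a dynamical invariant); all numbers
# are illustrative placeholders.
rng = np.random.default_rng(4)
period = rng.uniform(2.0, 6.0, 200)          # cycle period (s), varies cycle to cycle
interval = 0.4 * period + 0.1 + 0.05 * rng.standard_normal(200)

slope, intercept = np.polyfit(period, interval, 1)
r = np.corrcoef(period, interval)[0, 1]      # strength of the linear invariant
```

An interval whose correlation with the period stays near zero under the same analysis would instead be classified as freely varying rather than invariant.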

Acknowledgements: We acknowledge support from AEI/FEDER PGC2018-095895-B-I00 and TIN2017-84452-R.

References

  1. Elices I, Levi R, Arroyo D, Rodriguez FB, Varona P. Robust dynamical invariants in sequential neural activity. Scientific Reports. 2019; 9(1): 1-13.

  2. Elliott CJ, Andrew T. Temporal analysis of snail feeding rhythms: a three-phase relaxation oscillator. Journal of Experimental Biology. 1991; 157(1): 391-408.

  3. Vavoulis DV, et al. Dynamic control of a central pattern generator circuit: a computational model of the snail feeding network. European Journal of Neuroscience. 2007; 25(9): 2805-2818.

O12 A spatial developmental generative model of human brain structural connectivity

Stuart Oldham1, Ben Fulcher2, Kevin Aquino3, Aurina Arnatkevičiūtė3, Rosita Shishegar3, Alex Fornito3

1Monash University, Clayton, Australia; 2The University of Sydney, School of Physics, Sydney, Australia; 3Monash University, The Turner Institute for Brain and Mental Health, School of Psychological Sciences and Monash Biomed, Clayton, Australia

Correspondence: Stuart Oldham (sjold1@student.monash.edu)

BMC Neuroscience 2020, 21(Suppl 1):O12

The human connectome has a complex topology that is thought to enable adaptive function and behaviour. Yet the mechanisms leading to the emergence of this topology are unknown. Generative models can shed light on this question, by growing networks in silico according to specific wiring rules and comparing properties of model-generated networks to those observed in empirical data [1]. Models involving trade-offs between the metabolic cost and functional value of a connection can reproduce topological features of human brain networks at a statistical level, but are less successful in replicating how certain properties, most notably hubs, are spatially embedded [2,3]. A potential reason for this limited predictive ability is that current models assume a fixed geometry based on the adult brain, ignoring the major changes in shape and size that occur early in development, when connections form.

To address this limitation, we developed a generative model that accounts for developmental changes in brain geometry, informed by structural MRIs obtained from a public database of foetal scans acquired at 21–38 weeks gestational age [4]. We manually segmented the cortical surface of each brain and registered each surface to an adult template surface using Multimodal Surface Matching [5,6]. This procedure allowed us to map nodes to consistent spatial locations through development and to measure how distances between nodes (a proxy for connectome wiring cost) change through development. We evaluated the performance of classic trade-off models [2] that either assume a fixed, adult brain geometry (static) or allow cost-value trade-offs to change dynamically with developmental variations in brain shape and size (growth). We used connectomes generated from diffusion MRI in 100 healthy adults to benchmark model performance. Model fit was calculated by comparing model and empirical distributions of topological properties. An optimisation procedure was used to find the optimal parameters and best-fitting models for each individual adult brain network [2]. For a fair comparison of model fit across models of varying parametric complexity, we used a leave-one-out cross-validation procedure.
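A trade-off generative model of the kind evaluated here can be sketched as follows: edges are added one at a time with probability proportional to a power of distance (cost) times a power of the matching index (value). The exponents and the small epsilon are illustrative, not the optimised values.

```python
import numpy as np

def grow_network(dist, m, eta=-2.0, gamma=1.5, seed=0):
    """Grow a binary network with a distance-matching trade-off rule in the
    spirit of Betzel et al. [2]: each new edge (i, j) is drawn with
    probability proportional to dist_ij**eta * (matching_ij + eps)**gamma.
    eta penalizes long (costly) connections; gamma rewards pairs with shared
    neighbourhoods. Parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    A = np.zeros((n, n), dtype=bool)
    iu, ju = np.triu_indices(n, 1)
    for _ in range(m):
        Af = A.astype(float)
        deg = Af.sum(1)
        common = (Af @ Af)[iu, ju]             # shared neighbours per pair
        denom = deg[iu] + deg[ju]
        match = np.divide(2.0 * common, denom,
                          out=np.zeros_like(common), where=denom > 0)
        P = dist[iu, ju] ** eta * (match + 1e-5) ** gamma
        P[A[iu, ju]] = 0.0                     # no duplicate edges
        k = rng.choice(P.size, p=P / P.sum())
        A[iu[k], ju[k]] = A[ju[k], iu[k]] = True
    return A

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, (30, 3))           # placeholder node positions
dist = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
A = grow_network(dist, m=60)
```

A growth variant would recompute `dist` from the developing geometry at each step, which is the modification examined in this work.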

Spatial models (sptl; which include only distance information) produced poorer fits than those involving distance–topology trade-offs. Homophily models (matching, neighbours; where connections form between nodes with common neighbours) were among the best fitting. Growth models produced slightly better fits than static models overall, and these results generally held when the cross-validation procedure was employed (Fig. 1a). Neither growth nor static models reproduced the spatial topography of network hubs, but growth models were associated with a less centralized anatomical distribution of hubs across the brain, which is more consistent with the empirical data (Fig. 1b).

Fig. 1

a Cross-validation results for static and growth models. Different models are shown on the x-axis and the shading of the boxplot indicates the model type: static models are darker while growth models are lighter. b Hub distribution under different models. The size and colour show the degree of each node (blue/small indicates a low degree node, red/large indicates a high degree node)

In summary, we introduce a new framework for examining how developmental changes in brain geometry influence brain connectivity. Our results suggest that while such changes influence network topology, they are insufficient to explain how complex connectivity patterns emerge in brain networks.

References

  1. Betzel RF, Bassett DS. Generative models for network neuroscience: prospects and promise. Journal of The Royal Society Interface. 2017; 14(136): 20170623.

  2. Betzel RF, et al. Generative models of the human connectome. Neuroimage. 2016; 124: 1054-64.

  3. Zhang X, et al. Generative network models identify biological mechanisms of altered structural brain connectivity in schizophrenia. bioRxiv. 2019; 604322.

  4. Gholipour A, et al. A normative spatiotemporal MRI atlas of the fetal brain for automatic segmentation and analysis of early brain growth. Scientific Reports. 2017; 7: 476.

  5. Robinson EC, et al. MSM: a new flexible framework for multimodal surface matching. Neuroimage. 2014; 100: 414-426.

  6. Robinson EC, et al. Multimodal surface matching with higher-order smoothness constraints. Neuroimage. 2018; 167: 453-465.

O13 Cortical integration and segregation explained by harmonic modes of functional connectivity

Katharina Glomb1, Gustavo Deco2, Morten L Kringelbach3, Patric Hagmann4, Joel Pearson5, Selen Atasoy3

1Centre Hospitalier Universitaire Vaudois, Department of Radiology, Lausanne, Switzerland; 2Universitat Pompeu Fabra, Barcelona, Spain; 3University of Oxford, Department of Psychiatry, Oxford, United Kingdom; 4University Hospital of Lausanne and University of Lausanne, Department of Radiology, Lausanne, Switzerland; 5University of New South Wales, School of Psychology, Sydney, Australia

Correspondence: Katharina Glomb (katharina.glomb@upf.edu)

BMC Neuroscience 2020, 21(Suppl 1):O13

The idea that harmonic modes - basis functions of the Laplace operator - are meaningful building blocks of brain function is gaining attention [1–3]. We extracted harmonic modes from the Human Connectome Project’s (HCP) dense functional connectivity (dFC), an average over 812 participants’ resting state fMRI dFC matrices. In this case, harmonic modes give rise to functional harmonics. Each functional harmonic is a connectivity gradient [4] that is associated with a different spatial frequency; thus, functional harmonics provide a frequency-ordered, multi-scale, multi-dimensional description of cortical functional organization.
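Concretely, harmonic modes of this kind are eigenvectors of a graph Laplacian built on the connectivity matrix, ordered by eigenvalue from coarse to fine spatial frequency. A minimal sketch using a plain combinatorial Laplacian (the published pipeline operates on the HCP dense connectome, and its exact Laplacian normalization is not assumed here):

```python
import numpy as np

def functional_harmonics(fc, n_modes=12):
    """Eigenmodes of the graph Laplacian of a functional connectivity matrix.
    Low eigenvalues correspond to coarse (low spatial frequency) gradients."""
    W = np.maximum(fc, 0.0)         # keep positive connections only
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W  # combinatorial graph Laplacian
    evals, evecs = np.linalg.eigh(L)
    return evals[:n_modes], evecs[:, :n_modes]
```

The first mode of a connected graph is constant with eigenvalue zero; subsequent modes are the gradients of increasing spatial frequency discussed in the text.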

We propose functional harmonics as an underlying principle of integration and segregation. Figure 1a shows two functional harmonics on the cortical surface. In harmonic 11 (ψ11), the two functional regions that correspond to the two hands are on opposite ends of the gradient (different colors on the surface) and are thus functionally segregated. In contrast, in harmonic 7 (ψ7), the two areas are on the same end of the gradient, and are thus integrated. In this way, functional harmonics explain how two brain regions can be both functionally integrated and segregated, depending on the context.

Fig. 1

a Specialized brain regions emerge from continuous gradients in multiple dimensions. In this example, somatotopic areas arise from functional harmonics 3 and 11. b Certain functional harmonics (here, functional harmonic 8) capture retinotopy. Vertices in V1-V4 are plotted in retinotopic space [6] in the same colors as on the flattened surface of the early visual cortex

Figure 1a illustrates how specialized areas emerge from the smooth gradients of functional harmonics: the two hand areas occupy well-separated regions of the space spanned by ψ7 and ψ11. Thus, functional harmonics unify two perspectives, a view where the brain is organized in discrete modules, and one in which function varies gradually [4].

The borders drawn on the cortex correspond to functional areas in the HCP’s multimodal parcellation [5]. In this example, the isolines of the gradients of the functional harmonics follow the borders. We quantified how well, in general, the first 11 functional harmonics follow the borders of cortical areas by comparing the variability of the functional harmonics within and between the areas given by the HCP parcellation; i.e. we computed the silhouette value (SH), averaged over all 360 cortical areas. The SH lies between 0 and 1, where 1 means perfect correspondence between isolines and parcels. We found average SHs between 0.65 (ψ10) and 0.85 (ψ1), indicating a very good correspondence. Thus, functional harmonics capture the “modular perspective” of brain function.
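As an illustration of the within- versus between-parcel comparison, the sketch below computes a standard silhouette score for one harmonic's vertex values under a parcellation. Note that this generic silhouette lies in [-1, 1]; the exact variant and normalization behind the reported 0-1 values are not specified here, so treat this as an analogue of the authors' measure rather than a reproduction of it:

```python
import numpy as np

def silhouette_1d(values, labels):
    """Mean silhouette score of 1-D samples (e.g. one harmonic's vertex
    values) grouped by parcel label; higher means better separation."""
    values, labels = np.asarray(values, float), np.asarray(labels)
    parcels = np.unique(labels)
    scores = []
    for i, v in enumerate(values):
        own = labels[i]
        same = values[labels == own]
        a = np.abs(same - v).sum() / max(len(same) - 1, 1)  # within-parcel distance
        b = min(np.abs(values[labels == p] - v).mean()       # nearest other parcel
                for p in parcels if p != own)
        denom = max(a, b)
        scores.append((b - a) / denom if denom > 0 else 0.0)
    return float(np.mean(scores))
```

A harmonic whose isolines follow parcel borders yields values that are homogeneous within parcels and distinct between them, giving a score near 1.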

On the other hand, several functional harmonics are found to capture topographic maps and thus, gradually varying function. One important example is retinotopic organization of the visual cortex. Figure 1b shows functional harmonic 8 (ψ8) as an example in which both angular and eccentricity gradients are present [6]. Topographic organization is also found in the somatosensory/motor cortex, known as somatotopy. This is shown in Figure 1a, where several somatotopic body areas are reproduced.

Taken together, our results show that functional specialization, topographic maps, and the multi-scale, multi-dimensional nature of functional networks are captured by functional harmonics, thereby connecting these empirical observations to the general mathematical framework of harmonic eigenmodes.

References

  1. 1.

    Atasoy S, Donnelly I, Pearson J. Human brain networks function in connectome-specific harmonic waves. Nature Communications. 2016; 7: 10340.

  2. 2.

    Robinson PA, et al. Eigenmodes of brain activity: Neural field theory predictions and comparison with experiment. Neuroimage. 2016; 142: 79-98.

  3. 3.

    Tewarie P, et al. How do spatially distinct frequency specific MEG networks emerge from one underlying structural connectome? The role of the structural eigenmodes. NeuroImage. 2019; 186: 211-220.

  4. 4.

    Margulies DS, et al. Situating the default-mode network along a principal gradient of macroscale cortical organization. Proceedings of the National Academy of Sciences. 2016; 113(44): 12574-12579.

  5. 5.

    Glasser MF, et al. A multi-modal parcellation of human cerebral cortex. Nature. 2016; 536(7615): 171–178.

  6. 6.

    Benson NC, et al. The Human Connectome Project 7 Tesla retinotopy dataset: Description and population receptive field analysis. Journal of Vision. 2018; 18(13): 23-23.

O14 Reconciling emergences: an information-theoretic approach to identify causal emergence in multivariate data

Pedro Mediano1, Fernando Rosas2, Henrik Jensen3, Anil Seth4, Adam Barrett4, Robin Carhart-Harris2, Daniel Bor1

1University of Cambridge, Department of Psychology, Cambridge, United Kingdom; 2Imperial College London, Department of Medicine, London, United Kingdom; 3Imperial College London, Department of Mathematics, London, United Kingdom; 4University of Sussex, Department of Informatics, Brighton, United Kingdom

Correspondence: Pedro Mediano (pam83@cam.ac.uk)

BMC Neuroscience 2020, 21(Suppl 1):O14

The broad concept of emergence is instrumental in various key open scientific questions – yet, few quantitative theories of what constitutes emergent phenomena have been proposed. We introduce a formal theory of causal emergence in multivariate systems, which studies the relationship between the dynamics of parts of a system and macroscopic features of interest. Our theory provides a quantitative definition of downward causation, and introduces a complementary modality of emergent behaviour, which we refer to as causal decoupling. Moreover, we provide criteria that can be efficiently calculated in large systems, making the theory applicable in a range of practical scenarios. We illustrate our framework in a number of case studies, including Conway’s Game of Life and ECoG data from macaques during a reaching task, which suggest that the neural representation of motor behaviour may be causally decoupled from cortical activity.
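One efficiently computable criterion in this spirit (our paraphrase, with plug-in estimators; the full theory's exact functional may differ) compares how well the macroscopic variable predicts its own future against the summed predictions from individual micro variables, Psi = I(V_t; V_{t+1}) - Σ_j I(X_t^j; V_{t+1}), with Psi > 0 read as evidence of emergence:

```python
import numpy as np
from collections import Counter

def mutual_info(x, y):
    """Plug-in mutual information (in nats) between two discrete sequences."""
    n = len(x)
    pxy = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    return sum((c / n) * np.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def psi(X, V):
    """Practical emergence score: I(V_t; V_{t+1}) - sum_j I(X_t^j; V_{t+1}).
    X: (T, n) array of micro variables; V: length-T macro variable."""
    vt1 = V[1:]
    return mutual_info(V[:-1], vt1) - sum(
        mutual_info(X[:-1, j], vt1) for j in range(X.shape[1]))
```

A textbook example is a persistent parity bit: the parity of the micro variables predicts its own future, while each individual bit carries no information about it. Plug-in estimators are biased for short recordings, so this sketch is only indicative.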

P1 Homeostatic recognition circuits emulating network-wide bursting and surprise

Tsvi Achler

Optimizing Mind, Palo Alto, CA, United States of America

Correspondence: Tsvi Achler (achler@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P1

Understanding the circuits of recognition is essential for a deeper understanding of virtually all of the brain's behaviors and circuits.

The goal of this work is to capture simultaneous findings on both the neural and behavioral levels, namely Network Wide Bursting (NWB) dynamics with surprise (unexpected inputs), using a hypothesized recognition circuit based on the idea of homeostasis flow.

If a brain at resting state is presented with an unexpected or new stimulus, the network shows a fast network-wide increase in activation (NWB of many neurons) followed by a slower inhibition, until the network settles again to a resting state. Bursting phenomena during recognition are found ubiquitously in virtually every type of organism, within isolated brain preparations, and even in neural tissue grown in a dish (Fig. 1). Their source and function remain poorly understood. A behavioral manifestation of surprise can be observed if the input is highly unexpected, and may involve multiple brain regions.

Fig. 1

Network-Wide Bursting. Left: A 26 node Homeostatic network trained on MNIST digit recognition data & its response to a numeral digit 1 presented at time 0. The nodes for all digits 0-9 briefly burst but then settle down, leaving only node 1 active. Right: example neuron recordings from auditory cortex & surrounding regions in response to sound stimuli (modified from Lakatos et al. 2005)

The homeostatic flow model posits that activation from inputs is balanced with top-down pre-synaptic regulatory feedback from output neurons. Information is projected from inputs to outputs through forward connections, then back to inputs through backward homeostatic connections, which inhibit the inputs. This effectively balances the inputs and outputs (homeostasis) and generates an internal, error-dependent input. This homeostatic input is then projected again to the outputs and back again until the output values reflect the recognition result. This occurs during recognition, and no weights are learned.
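One plausible minimal realization of this recognition loop (a sketch of the idea, not the author's exact circuit) reconstructs the input from the current outputs, divides the input by the reconstruction, and re-projects until the outputs stabilize:

```python
import numpy as np

def recognize(x, W, steps=50, eps=1e-9):
    """Iterative recognition with inhibitory top-down feedback; no weights change.
    x: input vector (n_in,); W: weights (n_out, n_in), rows ~ learned patterns."""
    y = np.ones(W.shape[0])             # all outputs start weakly active
    norm = W.sum(axis=1) + eps          # per-output count of its inputs
    history = [y.copy()]
    for _ in range(steps):
        feedback = W.T @ y + eps        # top-down reconstruction of the input
        corrected = x / feedback        # divisive (homeostatic) inhibition
        y = y * (W @ corrected) / norm  # re-project the corrected input
        history.append(y.copy())
    return y, np.array(history)
```

For `W` with rows `[1,1,0,0]` and `[0,0,1,1]` and input `[1,1,0,0]`, the outputs settle to approximately `[1, 0]`; an ambiguous input transiently drives both outputs, loosely mirroring the network-wide burst. The particular update rule is an illustrative assumption.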

When a surprise or unexpected input stimulus is presented, NWB occurs because the homeostatic balance is disturbed with the new stimulus. The system subsequently calms down as it settles back to a new homeostasis.

In comparison to existing models, this circuit differs from Adaptive Resonance Theory because: 1) no lateral connections are required (inhibitory or otherwise); 2) all neurons feed backwards pre-synaptically at the same time; 3) there is no vigilance parameter. It differs from Hopfield networks because the top-down feedback is negative (inhibitory and homeostatic) rather than positive. This changes the functions and dynamics of the model, making it stable: its dynamics converge to steady state as long as the inputs do not change.

The homeostatic feedback should not be confused with the error signal of learning algorithms, since: 1) it is applied during recognition; 2) it does not adjust any weights at any time; 3) it is not generated using training data. It differs from generative and predictive coding models because: 1) it is used primarily during recognition, not learning; 2) the generative and recognition components are inseparable and contained within a single integrated homeostatic circuit.

The network is connectionist but approximates a Bayesian network in that: 1) the homeostatic weights are roughly equivalent to Bayesian likelihood values; 2) the output values can behave as Bayesian priors if they are maintained externally or if the inputs suddenly change. Maintaining priors changes circuit recognition and dynamics without changing weights.

Learning can be achieved with simple Hebbian learning, yielding weights similar to Bayesian likelihoods. Both directions of the homeostatic process learn the same weights. Single-layer learning is demonstrated on standard MNIST digits while capturing the neural findings of NWB.

P2 The monosynaptic inference problem: linking statistics and dynamics in ground truth models

Zach Saccomano1, Rodrigo Pena2, Sam Mckenzie3, Horacio Rotstein4, Asohan Amarasingham5

1City University of New York, Biology, New York, United States of America; 2New Jersey Institute of Technology, Federated Department of Biological Sciences, Newark, United States of America; 3New York University, Neuroscience Institute, New York, United States of America; 4New Jersey Institute of Technology, Federated Department of Biological Sciences, NJIT / Rutgers University, Newark, United States of America; 5New Jersey Institute of Technology, Biological Sciences, New York, United States of America

Correspondence: Zach Saccomano (zsaccomano@ccny.cuny.edu)

BMC Neuroscience 2020, 21(Suppl 1):P2

Given the ability to record spike trains from populations of neurons, a natural aim in neuroscience is to infer properties of synapses, including connectivity maps, from such recordings. These inferences can derive from observations of strong millisecond-timescale correlations among spike train pairs, as typically reflected by a sharp, short-latency peak in the causal direction of cross-correlograms (CCG) between the reference and target neurons. However, such sharp peaks may also occur when two disconnected neurons systematically fire close together in time in the absence of a direct monosynaptic connection. A further confound is that a monosynapse likely influences the postsynaptic cell on broader timescales as well. These observations motivate a systematic analysis of how a monosynapse exerts influence on the intrinsic dynamics of its postsynaptic target and how this affects the properties of the CCG and the ability to infer the monosynaptic properties. In previous work [1], we adapted a statistical framework for monosynaptic inference based on a (statistical) separation-of-timescale principle, in which monosynaptic interactions are systematically assumed to drive spike-spike correlations at finer timescales than non-monosynaptic interactions. We examined this principle in a simplified ground truth neuron model with minimal intrinsic dynamics, such as the leaky integrate-and-fire (LIF) model with an adaptive threshold. In this work, we extend these ideas to more realistic models and multiple time scales. We use a generalized LIF model with two-dimensional subthreshold dynamics and multiple (dynamic) time scales. The model describes the nonlinear dynamics of the voltage and a slower adaptation gating variable. These subthreshold dynamics also describe the onset of spikes, but not the spiking dynamics. Spikes are added manually. Our previous work exploited our ability to study counterfactual causal inferences in simulations. 
For example, in simulations with two neurons and comodulated noise, how much would the peak of the CCG change if the monosynapse were deleted? Here we extend this approach to the more complex models where the properties of the CCG are affected by the model nonlinearities and time scales. In this scenario, the model’s slow time scale (captured by the adaptation time constant) affects the CCG time scales (an emergent property of the monosynaptic interaction). Finally, we assess how bias induced by the separation-of-timescale principle (in the statistical sense) depends on the intrinsic dynamics of postsynaptic cells, in particular on the separation of time scales in the dynamic modeling sense.
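The central statistic here, the cross-correlogram, can be sketched directly (illustrative window and bin sizes; a sharp peak at a latency of a few milliseconds is the classic putative-monosynapse signature):

```python
import numpy as np

def cross_correlogram(ref, target, window=0.05, bin_width=0.001):
    """Histogram of target spike times relative to each reference spike,
    within +/- window seconds (all times in seconds).
    Returns bin centres and counts."""
    edges = np.arange(-window, window + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    target = np.sort(np.asarray(target))
    for t in ref:
        lo = np.searchsorted(target, t - window)  # restrict to the local window
        hi = np.searchsorted(target, t + window)
        counts += np.histogram(target[lo:hi] - t, bins=edges)[0]
    return edges[:-1] + bin_width / 2, counts
```

For a target that reliably fires 3.5 ms after the reference, the counts peak in the bin centred near +3.5 ms; comodulated but disconnected pairs can produce broader peaks at similar lags, which is the confound discussed above.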

References

  1. Platkiewicz J, Saccomano Z, McKenzie S, English D, Amarasingham A. Monosynaptic inference via finely-timed spikes. arXiv. 2019; 1909.08553 [preprint].

P3 Using higher-order networks to analyze non-linear dynamics in neural network models

Xerxes Arsiwalla

Pompeu Fabra University, Barcelona, Spain

Correspondence: Xerxes Arsiwalla (x.d.arsiwalla@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P3

In linear dynamical systems, one has an elegant way to analyze the system’s dynamics using a network representation of the state transition matrix, obtained from a state space formulation of the system of ODEs. However, in non-linear systems, there is no state space formulation to begin with. In recent work, we have established a correspondence between non-linear dynamical systems and higher-order networks [1] (see also [2]). The latter refer to graphs that include links between nodes and edges, as well as links between two edges. It turns out that such networks have a rich structure capable of representing non-linearities in the vector field of dynamical systems. To do this, one has to first dimensionally unfold a system of non-linear ODEs such that non-linear terms in the vector field can be re-expressed using auxiliary dynamical variables. This results in an unfolded dynamical system with only polynomial non-linearities. This operation works for a large class of non-linear systems. It turns out that once we have a polynomial vector field, the system can then be expressed in generalized state space form. This is what ultimately admits a graphical representation of the system. However, the resulting graph consists of higher-order edges. This generalizes the more common usage of networks with dyadic edges to networks with compounded edges. Here, we show an application of these graphs to analyze neural networks built from sigmoidal rate models as well as mean-field models. Higher-order graphs enable one to systematically decompose contributions of various non-linear terms to the dynamics as well as analyze stability and control of the system using network properties.

References

  1. Arsiwalla XD. A functorial correspondence between non-linear dynamical systems and higher-order networks. 2020 [submitted].

  2. Baez JC, Pollard BS. A compositional framework for reaction networks. Reviews in Mathematical Physics. 2017; 29(09): 1750028.

P4 Self-organization of connectivity in spiking neural networks with balanced excitation and inhibition

Jihoon Park1, Yuji Kawai2, Minoru Asada2

1Osaka University, Suita, Japan; 2Osaka University, Institute for Open and Transdisciplinary Research Initiatives, Suita, Japan

Correspondence: Jihoon Park (jihoon.park@otri.osaka-u.ac.jp)

BMC Neuroscience 2020, 21(Suppl 1):P4

Atypical neural activity and structural network changes have been detected in the brains of individuals with autism spectrum disorder (ASD) [1]. It has been hypothesized that an imbalance in the activity of excitatory and inhibitory neurons causes the pathological changes in autistic brains, known as the E/I balance hypothesis [2]. In this study, we investigate the effect of E/I balance on the self-organization of network connectivity and neural activity using a modeling approach. Our model follows the Izhikevich spiking neuron model [3] and consists of three neuron groups, each composed of 800 excitatory neurons and NI inhibitory neurons (Fig. 1A). Each excitatory neuron had 100 intraconnections with randomly selected neurons in the same neuron group, and 42 interconnections with randomly selected neurons in its neighboring neuron group. These synaptic weights were modified using the spike-timing-dependent plasticity rule [3]. Each inhibitory neuron had 100 intraconnections with randomly selected excitatory neurons in the same neuron group, but inhibitory neurons had no interconnections and no plasticity. We simulated the model with different numbers of inhibitory neurons (NI) and inhibitory synaptic weights (WI) in one neuron group (neuron group 1 in Fig. 1A) to change the degree of inhibition in that group. NI and WI in the other groups (2 and 3 in Fig. 1A) were set to 200 and -5, respectively. The simulation results show greater intraconnections in all neuron groups when NI and WI took lower values, i.e., when the E/I ratio increased relative to the typical E/I ratio (Fig. 1B). Moreover, asymmetric interconnections between neuron groups emerged, with synaptic weights from neuron group 2 to 1 higher than those in the opposite direction (Fig. 1C), when the E/I ratio increased. Furthermore, the phase coherence between the average potentials of neuron groups was found to be weak with an increased E/I ratio (Fig. 1D).
These results indicate that disruption of the E/I balance, especially weak inhibition, induces excessive local connections and asymmetric intergroup connections. As a consequence, the synchronization between neuron groups decreases, i.e., there is weak long-range functional connectivity. These results suggest that E/I imbalance might cause strong local anatomical connectivity and weak long-range functional connectivity in the brains of individuals with ASD [1].
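The single-neuron dynamics in the model follow Izhikevich [3]; a minimal sketch of one neuron with the standard regular-spiking parameters and forward-Euler integration (the network, STDP, and group structure are omitted):

```python
def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, T=200.0):
    """Single Izhikevich neuron (regular-spiking defaults), forward Euler:
        v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u),
    with reset v <- c, u <- u + d whenever v reaches 30 mV.
    Returns spike times in ms."""
    v, u = c, b * c
    spikes = []
    for k in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike: record and reset
            spikes.append(k * dt)
            v, u = c, u + d
    return spikes
```

Excitatory and inhibitory populations in such networks differ only in their parameter sets and the signs of their synaptic weights, which is what the NI and WI manipulations above vary.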

Fig. 1

Model overview and results. A A model consisting of neuron groups. Neuron group 1 has controlled inhibitory neurons and does not directly connect with neuron group 3. B Average weights of intraconnections in each neuron group after self-organization. C Average weights of interconnections among neuron groups after self-organization. D Phase coherence between neuron groups 1 and 3

Acknowledgements: This work was supported by JST CREST Grant Number JPMJCR17A4 including the AIP challenge (conceptualization and resources), and a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO; implementation and data curation).

References

  1. Belmonte MK, et al. Autism and abnormal development of brain connectivity. Journal of Neuroscience. 2004; 24: 9228–9231.

  2. Nelson SB, Valakh V. Excitatory/inhibitory balance and circuit homeostasis in autism spectrum disorders. Neuron. 2015; 87: 684–698.

  3. Izhikevich EM. Polychronization: computation with spikes. Neural Computation. 2006; 18: 245–282.

P5 Mathematical modeling of the circadian clock in spiders

Daniel Robb1, Lu Yang2, Nadia Ayoub2, Darrell Moore3, Thomas Jones3, Natalia Toporikova2

1Roanoke College, Department of Mathematics, Computer Science and Physics, Salem, Virginia, United States of America; 2Washington and Lee University, Department of Biology, Lexington, Virginia, United States of America; 3East Tennessee State University, Department of Biological Sciences, Johnson City, Tennessee, United States of America

Correspondence: Daniel Robb (robb@roanoke.edu)

BMC Neuroscience 2020, 21(Suppl 1):P5

The circadian clock in organisms produces oscillations in neural and physiological functions with an intrinsic period on the order of the 24-hour circadian day. The circadian clock, controlled by the suprachiasmatic nucleus (SCN) in the brain, is entrained by light-dark cycles, so that organisms synchronize these essential oscillations with external conditions. The clock has been studied extensively in flies and yeast, among other species, with intrinsic periods ranging from 22-26 hours [1], and a fairly limited range of entrainment to external light-dark cycles. Interestingly, our previous work has shown that spiders have a significantly wider range of intrinsic periods, from 19-30 hours, and an ability to entrain to a much wider range of applied external light-dark cycles. To identify a potential mechanism for the unusual circadian clock in spiders, we have developed a mathematical model for the spider circadian clock which incorporates negative feedback. We have used the model to investigate two possible mechanisms for the wide range of entrainment in spiders. First, light could be a ‘strong stimulus’ which acts powerfully on a circadian clock of typical strength. Second, the circadian clock in spiders could be ‘weak’, i.e., not as robust to perturbations, relative to that of other species. To distinguish between these two mechanisms, a bifurcation analysis of the model has been performed (Fig. 1). Our model makes several testable predictions. In the ‘strong stimulus’ scenario, we predict that, following a change in the applied light cycle, spiders with shorter intrinsic circadian periods show a faster onset of locomotor activity, while spiders with longer intrinsic periods show a slower onset. In addition, the ‘strong stimulus’ scenario could often lead to a lack of locomotor activity in constant light.
In contrast, the ‘weak clock’ scenario predicts a strong dependence of entrainment on the light intensity, and that, once achieved, entrainment would lead to an increase in the amplitude of model oscillations. The balance of our computational modeling and experimental results currently favors the weak clock scenario.
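The negative-feedback clock architecture can be illustrated with a classic Goodwin-type oscillator (a generic stand-in, not the authors' model), with light entering as an additive drive on the first variable; the parameters below are illustrative choices that place the loop past its oscillation threshold:

```python
import numpy as np

def goodwin(light=0.0, n=12, dt=0.01, T=600.0):
    """Goodwin negative-feedback oscillator with additive light input:
        X' = 1/(1 + Z**n) - 0.1 X + light
        Y' = X - 0.1 Y
        Z' = Y - 0.1 Z
    With equal degradation rates, the loop oscillates when the effective
    Hill slope at the fixed point exceeds 8 (satisfied here for n = 12).
    Time units are arbitrary. Returns time and X trajectories."""
    steps = int(T / dt)
    x = y = z = 0.1
    xs = np.empty(steps)
    for k in range(steps):
        dx = 1.0 / (1.0 + z ** n) - 0.1 * x + light
        dy = x - 0.1 * y
        dz = y - 0.1 * z
        x += dt * dx
        y += dt * dy
        z += dt * dz
        xs[k] = x
    return np.arange(steps) * dt, xs
```

In this sketch, a sufficiently strong constant light input raises the baseline and suppresses the oscillation altogether, qualitatively echoing the possible loss of locomotor rhythmicity in constant light; the weak versus strong regimes of the authors' bifurcation analysis correspond to how close such a loop sits to its oscillation threshold.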

Fig. 1

Bifurcation diagram of circadian clock model identifying weak, medium, and strong regimes

References

  1. Johnson CH, Elliott J, Foster R, Honma K-I, Kronauer R. Fundamental properties of circadian rhythms. In: Chronobiology: Biological Timekeeping. Sinauer. 2004; 67-105.

P6 Inference of functional connectivity in living neural networks

Sarah Marzen1, Martina Lamberti2, Michael Hess3, Jacob Mehlman4, Denise Hernandez4, Joost le Feber2

1Pitzer, Scripps, and Claremont McKenna College, Physics, Claremont, California, United States of America; 2University of Twente, Clinical Neurophysiology, Twente, Netherlands; 3Claremont McKenna College, Computational Neuroscience, Claremont, California, United States of America; 4Claremont McKenna College, Physics, Claremont, California, United States of America

Correspondence: Sarah Marzen (smarzen@cmc.edu)

BMC Neuroscience 2020, 21(Suppl 1):P6

In experiments with stimuli, we often wish to assess changes in connectivity between neurons as the experiment progresses. There are a number of methods for assessing functional connectivity, each with its own drawbacks, and it is not clear how these methods relate to one another. Furthermore, it is not clear how these functional connectivities relate to real synaptic connectivities. We present evidence that the functional connectivities from two disparate methods (conditional firing probability analysis and maximum entropy analysis) and the synaptic connectivities in one dynamical model (that of leaky integrate-and-fire neurons) are all related.
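As a simplified illustration of the first of these measures, the sketch below estimates a conditional-firing-probability curve: the probability that the target neuron fires within a given latency bin after a reference spike (the actual CFP analysis involves further normalization and curve fitting, which we do not assume here):

```python
import numpy as np

def conditional_firing_probability(ref, target, max_lag=0.5, bin_width=0.01):
    """CFP-style estimate: for each lag, the probability that the target
    neuron fires in [lag, lag + bin_width) after a reference spike."""
    lags = np.arange(0.0, max_lag, bin_width)
    target = np.sort(np.asarray(target))
    prob = np.zeros(len(lags))
    for i, lag in enumerate(lags):
        hits = 0
        for t in ref:
            lo = np.searchsorted(target, t + lag)
            hi = np.searchsorted(target, t + lag + bin_width)
            hits += hi > lo          # at least one target spike in the bin
        prob[i] = hits / len(ref)
    return lags, prob
```

A functionally coupled pair shows an elevated probability at short lags; comparing such curves with couplings inferred by maximum entropy fits, and with the ground-truth weights of a simulated leaky integrate-and-fire network, is the comparison described above.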

Fig. 1

The predicted Ji,j + Jj,i is compared to the true Ji,j + Jj,i for a recording of spontaneous activity from a neuronal culture. Only excitatory connections are included. The analytic expression requires that Δt be sufficiently small, and here Δt is 3 ms, which is not small enough. Even so, we see a strong relationship between predictions and measurements with R = 0.5

P7 Functional dissection of prefrontal networks using optogenetics and large-scale multi-electrode recordings

Eloise V Giraud1, Jean-Philippe Thivierge1, Michael Lynn2, Jean-Claude Beique2

1University of Ottawa, School of Psychology, Ottawa, Canada; 2University of Ottawa, Cellular and Molecular Medicine, Ottawa, Canada

Correspondence: Eloise V Giraud (egira024@uottawa.ca)

BMC Neuroscience 2020, 21(Suppl 1):P7

The prefrontal cortex (PFC) plays an important role in executive functions that guide reward-seeking, goal-directed and memory-guided behaviours. However, the contribution of specific cell types to the activity of broader cortical circuits remains largely unknown. This is important to inform accurate computational models of prefrontal cortex function. Here, we used high-density multielectrode arrays containing 4,096 closely spaced electrodes to monitor the spiking activity of PFC neurons in acute slice preparations. We developed spike sorting techniques that combined spline interpolation and principal component analysis to distinguish regular-spiking excitatory neurons from fast-spiking inhibitory interneurons. Our sorting algorithm was validated using a targeted combination of viral and optogenetic strategies. By cell-type-specific optogenetic stimulation, we described how parvalbumin interneurons regulate the interplay between excitation and inhibition. Specifically, we characterized the influence of parvalbumin interneurons on network-wide firing rates and distance-dependent pairwise correlations within the PFC. These results form a key target for computational models that aim to capture the interactions between excitation and inhibition in cortical areas.

P8 Average beta burst duration profiles provide a signature of dynamical changes between the ON and OFF medication states in Parkinson’s disease

Benoit Duchet1,2, Filippo Ghezzi3, Gihan Weerasinghe1,2, Gerd Tinkhauser4, Andrea A Kuhn5, Peter Brown1,2, Christian Bick6,7,8, Rafal Bogacz1,2

1University of Oxford, Nuffield Department of Clinical Neuroscience, Oxford, United Kingdom; 2University of Oxford, MRC Brain Network Dynamics Unit, Oxford, United Kingdom; 3University of Oxford, Department of Physiology, Anatomy, and Genetics, Oxford, United Kingdom; 4Bern University Hospital and University of Bern, Department of Neurology, Bern, Switzerland; 5Charité – Universitätsmedizin Berlin, Department of Neurology, Movement Disorder and Neuromodulation Unit, Berlin, Germany; 6University of Oxford, Oxford Centre for Industrial and Applied Mathematics, Mathematical Institute, Oxford, United Kingdom; 7University of Exeter, Centre for Systems, Dynamics, and Control and Department of Mathematics, Exeter, United Kingdom; 8University of Exeter, EPSRC Centre for Predictive Modelling in Healthcare, Exeter, United Kingdom

Correspondence: Benoit Duchet (benoit.duchet@ndcn.ox.ac.uk)

BMC Neuroscience 2020, 21(Suppl 1):P8

Parkinson’s disease motor symptoms are associated with an increase in subthalamic nucleus beta band oscillatory power. However, these oscillations are phasic, and a growing body of evidence suggests that beta burst duration may be of critical importance to motor symptoms, making insights into the dynamics of beta burst generation valuable. In this study, we ask whether average burst duration can reveal how dynamics change between the ON and OFF medication states. Our analysis of local field potentials from the subthalamic nucleus demonstrates, using linear surrogates, that the system generating beta oscillations acts in a more non-linear regime OFF medication and that the change in the degree of non-linearity is correlated with motor impairment. We further narrow down the dynamical changes responsible for the altered temporal patterning of beta oscillations between medication states by fitting biologically inspired models, as well as simpler models of the beta envelope, to the data (Fig. 1). Finally, we show that the non-linearity can be directly extracted from average burst duration profiles under the assumption of constant noise in envelope models, revealing that average burst duration profiles provide a window into burst dynamics, which may underlie the success of burst duration as a biomarker. In summary, we have demonstrated a relationship between average burst duration profiles, the dynamics of the system generating beta oscillations, and motor impairment, which puts us in a better position to understand the pathology and improve therapies.
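The average burst duration profile itself is straightforward to compute: threshold the beta envelope at a range of levels and average the durations of the supra-threshold excursions. A minimal sketch:

```python
import numpy as np

def burst_duration_profile(envelope, dt, thresholds):
    """Average burst duration as a function of threshold: a burst is a
    maximal run of consecutive samples with envelope above the threshold."""
    means = []
    for th in thresholds:
        above = envelope > th
        # run-length encode the boolean series
        change = np.flatnonzero(np.diff(above.astype(int)))
        edges = np.concatenate(([-1], change, [len(above) - 1]))
        runs = np.diff(edges)                # run lengths (in samples)
        is_burst = above[edges[:-1] + 1]     # does the run start above threshold?
        durations = runs[is_burst] * dt
        means.append(durations.mean() if durations.size else 0.0)
    return np.array(means)
```

Sweeping the threshold produces the profiles of Fig. 1: under the constant-noise envelope models described, the shape of this curve constrains the drift function and hence the degree of non-linearity.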

Fig. 1

A, B Average burst duration profiles are obtained by computing the average burst duration of the beta envelope for a range of thresholds. Considering envelope models of the form dX_t = −µ(X_t) dt + ζ dW_t, where µ is the drift function, W a Wiener process, and ζ a constant noise parameter, we illustrate with two examples the link between envelope dynamics (C1, C2) and average burst duration profiles (E1, E2)
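The envelope model in the caption can be sketched numerically. The following is a minimal illustration, not the authors' code: an Euler–Maruyama integration of dX_t = −µ(X_t) dt + ζ dW_t with an assumed linear (Ornstein–Uhlenbeck) drift, followed by the computation of an average burst duration profile over a range of thresholds.

```python
import numpy as np

def simulate_envelope(drift, zeta, x0=0.0, dt=1e-3, n_steps=200_000, seed=0):
    """Euler-Maruyama simulation of dX_t = -drift(X_t) dt + zeta dW_t."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    dW = rng.standard_normal(n_steps - 1) * np.sqrt(dt)
    for i in range(n_steps - 1):
        x[i + 1] = x[i] - drift(x[i]) * dt + zeta * dW[i]
    return x

def average_burst_duration(env, thresholds, dt=1e-3):
    """Average burst duration profile: mean duration of supra-threshold
    excursions of the envelope, for each threshold."""
    profile = []
    for th in thresholds:
        above = env > th
        edges = np.diff(above.astype(int))
        starts = np.where(edges == 1)[0]
        ends = np.where(edges == -1)[0]
        if above[0]:            # series starts mid-burst: drop the orphan end
            ends = ends[1:]
        n = min(len(starts), len(ends))
        if n == 0:
            profile.append(np.nan)
        else:
            profile.append(((ends[:n] - starts[:n]) * dt).mean())
    return np.array(profile)
```

With a linear drift, higher thresholds yield shorter average bursts; a non-linear drift reshapes this profile, which is the kind of signature the abstract describes extracting.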

P9 Introducing EBRAINS: European infrastructure for brain research

Wouter Klijn1, Sandra Diaz1, Sebastian Spreizer2, Thomas Lippert3, Spiros Athanasiou4, Yannis Ioannidis5, Abigail Morrison6, Evdokia Mailli4, Katrin Amunts7, Jan Bjaalie8

1Forschungszentrum Jülich, Jülich Supercomputing Centre, Simulation Lab Neuroscience, Jülich, Germany; 2University of Freiburg, Bernstein Center Freiburg, Freiburg, Germany; 3Forschungszentrum Jülich, Institute for Advanced Simulation, Jülich Supercomputing Centre, Jülich, Germany; 4Athena Research & Innovation Center, Marousi, Greece; 5National & Kapodistrian University of Athens, Informatics & Telecom, Athens, Greece; 6Forschungszentrum Jülich, Institute of Neuroscience and Medicine (INM-6), Jülich, Germany; 7Forschungszentrum Jülich, Institute of Neuroscience and Medicine (INM 1), Jülich, Germany; 8University of Oslo, Institute of Basic Medical Sciences, Oslo, Norway

Correspondence: Wouter Klijn (w.klijn@fz-juelich.de)

BMC Neuroscience 2020, 21(Suppl 1):P9

The Human Brain Project (HBP), the ICT-based Flagship project of the EU, is developing EBRAINS - a research infrastructure providing tools and services which can be used to address challenges in brain research and brain-inspired technology development. EBRAINS will allow the creation of the necessary synergy between different national efforts to address one of the most challenging targets of research. This presentation will illustrate the services of the EBRAINS infrastructure with three use cases spanning the immensely diverse neuroscience field.

The first case is about Viktoria, a researcher who received a grant to investigate the distribution of interneuron types in the cortex and their activity under specific conditions. She needs a place to store, publish and share the data collected to add to the body of knowledge on the human brain. She contacts the HBP service desk and her case is forwarded to the data curation team, a part of the EBRAINS High Level Support Team. The data curators provide data management support and help make her data FAIR by registering it in the Knowledge Graph [1] and the Brain Atlas [2]. Her data is stored for 10 years, given a DOI to allow citations, and can be used by tools integrated in EBRAINS.

The second case is about Johanna, who has developed a software package for the analysis of iEEG data and now wants this tool to be used by as many researchers as possible. She contacts the HBP service desk and is put in contact with the EBRAINS technical coordination team. A co-design process is started together with the co-simulation framework developers, and her software is integrated into the simulation and analysis framework. After integration, Johanna’s tool can be used with experimental as well as simulated iEEG data. Her tool is integrated into the operations framework of EBRAINS and is easily deployed on the HPC resources available through EBRAINS.

The third use case is about Jim, a neuroscience professor with a strong focus on teaching. After learning about the HBP, he explores the EBRAINS website and discovers the wide range of educational tools available. NEST Desktop (https://github.com/babsey/nest-desktop), for instance, is a web-accessible interface for spiking neuronal network simulations. It allows the creation of a complete simulation in fewer than 10 mouse clicks, without the need to install any software. The output of a simulation can then be ported to Jupyter notebooks hosted on EBRAINS systems to perform additional analysis. The functionality is accompanied by online MOOCs and detailed documentation, providing him with enough material to fill multiple courses on neuroscience.

With the EBRAINS infrastructure the HBP is delivering a set of tools and services in support of all aspects of neuroscience research. More information is available at www.ebrains.eu, or email the service desk at support@ebrains.eu.

Acknowledgments: This research has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2).

References

  1. Amunts K, et al. The Human Brain Project—Synergy between neuroscience, computing, informatics, and brain-inspired technologies. PLoS Biology. 2019; 17(7): e3000344.

  2. Bjerke IE, et al. Data integration through brain atlasing: Human Brain Project tools and strategies. European Psychiatry. 2018; 50: 70–76.

P10 Using adaptive exponential integrate-and-fire neurons to study general principles of patho-topology of cerebellar networks

Maurizio De Pitta1, Jose A O Rodrigues2, Juan Sustacha3, Giulio Bonifazi1, Alicia Nieto-Reyes2, Sivan Kanner4, Miri Goldin3, Ari Barzilai5, Paolo Bonifazi3

1Basque Center for Applied Mathematics, Bilbao, Spain; 2University of Cantabria, Department of Mathematics, Statistics and Computer Science, Santander, Spain; 3BioCruces Health Research Institute, Barakaldo, Spain; 4Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland; 5Tel Aviv University, Department of Life Sciences, Ramat Aviv, Israel

Correspondence: Maurizio De Pitta (maurizio.depitta@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P10

Ataxia-telangiectasia (A-T) is an example of a systemic genetic disease impacting the cerebellar circuit’s structure and function. Kanner et al. [1] have shown how the A-T phenotype in mice correlates with severe glial atrophy and increased synaptic markers, resulting in altered cerebellar network dynamics. In particular, experiments in cerebellar cultures showed a disruption of network synchronizations, which was recovered by replacing A-T glial cells with healthy ones. Notably, the mere presence of healthy astrocytes was sufficient to restore physiological synaptic puncta levels between mutated neurons. In intact cerebellar circuits, glial morphological alterations and an increase in inhibitory synaptic connectivity markers were first reported and correlated (preliminary unpublished results) with an increase in the complex spiking of the Purkinje cells (PCs). To understand and model these structural-functional circuit alterations, we developed a simplified model of the cerebellar circuit. To this aim, we adopt the adaptive exponential integrate-and-fire (aEIF) neuron model in different parameter configurations, to capture essential functional features of four different cell types: granule cells, excitatory neurons of the inferior olive (IONs), Purkinje cells, and inhibitory neurons of the Deep Cerebellar Nuclei (DPNNs). Next, we explore, for different degrees of connectivity and synaptic weights, the dynamics of the simplified cerebellar circuitry. Our simulations suggest that the concomitantly increased number of inhibitory connections from PCs to DPNNs, and from DPNNs to IONs, ultimately results in disinhibited ION dynamics. As a consequence, IONs provide a higher rate of excitation to PCs within the cerebellar loop, which finally leads to a higher complex spiking frequency in PCs. These results provide new insights into the dysfunctional A-T cerebellar dynamics and open a new perspective for targeted pharmacological treatments.
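For reference, a minimal aEIF neuron in the Brette–Gerstner formulation can be written as follows. The parameter values are generic adapting-neuron values chosen for illustration, not the configurations the authors fitted for the four cerebellar cell types.

```python
import numpy as np

def aeif_spikes(I=500.0, T=300.0, dt=0.1,
                C=200.0, gL=10.0, EL=-70.0, VT=-50.0, DeltaT=2.0,
                a=2.0, tau_w=100.0, b=30.0, Vreset=-58.0, Vpeak=0.0):
    """Adaptive exponential integrate-and-fire neuron (forward Euler).

    C dV/dt = -gL (V - EL) + gL DeltaT exp((V - VT)/DeltaT) - w + I
    tau_w dw/dt = a (V - EL) - w ;   at spike: V -> Vreset, w -> w + b
    Units: pF, nS, mV, pA, ms. Returns spike times in ms.
    """
    n = int(T / dt)
    V, w = EL, 0.0
    spikes = []
    for k in range(n):
        dV = (-gL * (V - EL) + gL * DeltaT * np.exp((V - VT) / DeltaT)
              - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vpeak:              # spike detected: reset and adapt
            spikes.append(k * dt)
            V = Vreset
            w += b
    return spikes
```

Because w accumulates with each spike, interspike intervals lengthen over the trial, the spike-frequency adaptation that the aEIF model is typically chosen for.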

Acknowledgements: We thank the ‘Junior Leader’ Fellowship Program by ‘la Caixa’ Banking Foundation (Grant LCF/BQ/LI18/11630006).

Reference

  1. Kanner S, et al. Astrocytes restore connectivity and synchronization in dysfunctional cerebellar networks. Proceedings of the National Academy of Sciences. 2018; 115(31): 8025–8030.

P11 Effect of interglomerular inhibitory networks on olfactory bulb odor representations

Daniel Zavitz1, Isaac Youngstrom2, Matt Wachowiak2, Alla Borisyuk1

1University of Utah, Department of Mathematics, Salt Lake City, United States of America; 2University of Utah, Department of Neurobiology and Anatomy, Salt Lake City, United States of America

Correspondence: Daniel Zavitz (zavitz@math.utah.edu)

BMC Neuroscience 2020, 21(Suppl 1):P11

Lateral inhibition is a fundamental feature of circuits that process sensory information. In the mouse olfactory system, inhibitory interneurons called short axon cells initially mediate lateral inhibition between glomeruli, the functional units of early olfactory coding and processing. However, their interglomerular connectivity and its impact on odor representations is not well understood. To explore this question, we constructed a computational model of the interglomerular inhibitory network using detailed characterizations of short axon cell morphologies and simplified intraglomerular circuitry. We then examined how this network transformed glomerular patterns of odorant-evoked sensory input (taken from previously published datasets) at different values of interglomerular inhibition selectivity. We examined three connectivity schemes: selective (each glomerulus connects to a few others with heterogeneous strength), nonselective (glomeruli connect to most others with heterogeneous strength), and global (glomeruli connect to all others with equal strength). We found that both selective and nonselective interglomerular networks could mediate heterogeneous patterns of inhibition across glomeruli when driven by realistic sensory input patterns, but that global inhibitory networks were unable to produce input-output transformations that matched experimental data.

We further studied networks whose interglomerular connectivity was tuned by sensory input profile. We found that this network construction improved contrast enhancement as measured by decorrelation of odor representations. These results suggest that, despite their multiglomerular innervation patterns, short axon cells are capable of mediating odorant-specific patterns of inhibition between glomeruli that could, theoretically, be tuned by experience or evolution to optimize discrimination of particular odorants.
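The three connectivity schemes can be illustrated with toy interglomerular weight matrices. The connection probabilities and the exponential strength distribution below are assumptions chosen for illustration, not the fitted values of the authors' model.

```python
import numpy as np

def inhibition_matrix(n_glom, scheme, p_selective=0.1, p_nonselective=0.8,
                      strength=1.0, seed=0):
    """Toy interglomerular inhibition matrix for one of three schemes:
    'selective'    - few connections, heterogeneous strengths
    'nonselective' - most connections, heterogeneous strengths
    'global'       - all-to-all connections, equal strength
    """
    rng = np.random.default_rng(seed)
    if scheme == "global":
        W = np.full((n_glom, n_glom), strength)
    else:
        p = p_selective if scheme == "selective" else p_nonselective
        mask = rng.random((n_glom, n_glom)) < p
        # heterogeneous strengths on the existing connections
        W = mask * rng.exponential(strength, (n_glom, n_glom))
    np.fill_diagonal(W, 0.0)   # no self-inhibition of a glomerulus
    return W
```

Applying such a matrix to a glomerular input pattern x, the net interglomerular inhibition each glomerulus receives is simply W @ x, which is what differs across the three schemes.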

P12 A population level model of movement disorder oscillations and deep brain stimulation

Nada Yousif1, Peter Bain2, Dipankar Nandi3, Roman Borisyuk4

1University of Hertfordshire, Hatfield, United Kingdom; 2Imperial College London, Division of Brain Sciences, London, United Kingdom; 3Imperial College Healthcare NHS Trust, Neurosurgery, London, United Kingdom; 4University of Exeter, College of Engineering, Mathematics and Physical Sciences, Exeter, United Kingdom

Correspondence: Nada Yousif (n.yousif@herts.ac.uk)

BMC Neuroscience 2020, 21(Suppl 1):P12

The presence of neural oscillations is thought to be a hallmark of two of the most common movement disorders, Parkinson’s disease (PD) and essential tremor (ET). PD, whose symptoms are tremor, slowness of movement and stiffness, is caused by the loss of dopaminergic neurons in the substantia nigra. Although the pathological changes in the basal ganglia network are not yet fully understood, it is widely accepted that beta-band (15-30 Hz) oscillations play a role. ET affects up to one percent of adults over 40 years of age and is characterized by an uncontrollable shaking of the affected body part [1-3]. The neurophysiological basis of ET remains unknown, but pathological neural oscillations in the thalamocortical-cerebellar network are also implicated in generating symptoms.

In our previous network study of PD, we studied how a multi-channel model of Wilson-Cowan oscillators representing the STN-GPe network behaved in healthy and Parkinsonian conditions. We found that oscillations exist for a much wider range of parameters in the Parkinsonian case, and demonstrated how an input representing DBS caused the oscillations to become chaotic and flattened the power spectrum. Turning to ET, we again used a mean-field approach, combined with intraoperative local field potential recordings from the Vim via DBS electrodes and simultaneous electromyographic activity from the contralateral affected limb(s). We used the Wilson-Cowan approach to model the thalamocortical-cerebellar network implicated in ET. We found that the network exhibited oscillatory behaviour within the tremor frequency range of 4-5 Hz, as did our electrophysiological data. Applying a DBS-like input to the modelled network suppressed these oscillations. Our two previous studies therefore show that the dynamics of the cerebellar-basal ganglia-thalamocortical network support oscillations at frequency ranges relevant to movement disorders, and that the application of a DBS-like input into the modelled networks disrupts such pathological activity. We believe that this is an important way to study the impact of DBS on the human brain, and that it should be used in conjunction with experimental recordings of neural activity as well as with single-neuron biophysical modelling work.

In this work we present new results from a combined model which exhibits Parkinsonian oscillations in the beta band, oscillations in the tremor frequency range, and oscillations in the gamma band, which we term healthy [4,5]. We find critical boundaries in the parameter space of the model separating regions with different dynamics. We go on to examine the transition from one oscillatory regime to another and the impact of DBS on these two types of pathological activity. This approach will not only allow us to better understand the mechanisms of DBS, but also to optimize the lengthy and difficult clinical process of setting stimulation parameters by trial and error, on which the reported improvement in symptoms relies [6,7]. Furthermore, with the advent of electrodes with more contacts, this process is becoming increasingly difficult. Hence, a theoretical understanding of DBS is particularly important at present.
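For concreteness, a single Wilson-Cowan excitatory-inhibitory node with an additive sinusoidal "DBS-like" drive can be sketched as below. The coupling weights, sigmoid and drive are generic textbook choices, not the fitted multi-channel model of the study; whether the node oscillates depends on the operating point.

```python
import numpy as np

def wilson_cowan(T=500.0, dt=0.1, wEE=16.0, wEI=12.0, wIE=15.0, wII=3.0,
                 tau=10.0, P=1.25, Q=0.0, dbs_amp=0.0, dbs_freq=0.13):
    """One Wilson-Cowan E-I node, forward Euler, time in ms.

    tau dE/dt = -E + f(wEE*E - wEI*I + P + DBS(t))
    tau dI/dt = -I + f(wIE*E - wII*I + Q)
    DBS(t) is a sinusoid added to the excitatory drive;
    dbs_freq = 0.13 cycles/ms corresponds to ~130 Hz stimulation.
    """
    f = lambda x: 1.0 / (1.0 + np.exp(-x))      # population response function
    n = int(T / dt)
    E = np.zeros(n)
    I = np.zeros(n)
    E[0], I[0] = 0.1, 0.1
    for k in range(n - 1):
        drive = dbs_amp * np.sin(2.0 * np.pi * dbs_freq * k * dt)
        dE = (-E[k] + f(wEE * E[k] - wEI * I[k] + P + drive)) / tau
        dI = (-I[k] + f(wIE * E[k] - wII * I[k] + Q)) / tau
        E[k + 1] = E[k] + dt * dE
        I[k + 1] = I[k] + dt * dI
    return E, I
```

Comparing the power spectrum of E with dbs_amp = 0 against dbs_amp > 0 is the kind of comparison the abstract describes for probing how a DBS-like input reshapes oscillatory activity.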

References

  1. Louis ED, Ottman R, Allen Hauser W. How common is the most common adult movement disorder? Estimates of the prevalence of essential tremor throughout the world. Movement Disorders. 1998; 13(1): 5–10.

  2. Brin MF, Koller W. Epidemiology and genetics of essential tremor. Movement Disorders. 1998; 13(S3): 55–63.

  3. Deuschl G, Bain P, Brin M, Ad Hoc Scientific Committee. Consensus statement of the Movement Disorder Society on tremor. Movement Disorders. 1998; 13(S3): 2–23.

  4. Beudel M, Brown P. Adaptive deep brain stimulation in Parkinson’s disease. Parkinsonism & Related Disorders. 2016; 22: S123–S126.

  5. Fischer P, et al. Subthalamic nucleus gamma activity increases not only during movement but also during movement inhibition. eLife. 2017; 6: e23947.

  6. Rizzone M, et al. Deep brain stimulation of the subthalamic nucleus in Parkinson’s disease: effects of variation in stimulation parameters. Journal of Neurology, Neurosurgery & Psychiatry. 2001; 71(2): 215–219.

  7. Moro E, et al. The impact on Parkinson’s disease of electrical parameter settings in STN stimulation. Neurology. 2002; 59(5): 706–713.

P13 Organization of connectivity between areas in the monkey frontoparietal network

Bryan Conklin1, Steve Bressler2

1Florida Atlantic University, Center for Complex Systems & Brain Science, West Palm Beach, Florida, United States of America; 2Florida Atlantic University, West Palm Beach, Florida, United States of America

Correspondence: Bryan Conklin (bconkli4@fau.edu)

BMC Neuroscience 2020, 21(Suppl 1):P13

Anatomical projections between cortical areas are known to condition the set of observable functional activity in a neural network. The large-scale cortical monkey frontoparietal network (FPN) has been shown to support complex cognitive functions. However, the organization of anatomical connectivity between areas in the FPN supporting such behavior is unknown. To identify the connections in this network, over 40 tract-tracing studies were collated according to the Petrides & Pandya [1] parcellation scheme, which provides a higher resolution map of the areas making up the FPN than other schemes. To understand how this structural profile can give rise to cognitive functions, a graph theoretic investigation was conducted in which the FPN’s degree distribution, structural motifs and small-worldness were analyzed. We present a new connectivity matrix detailing the anatomical connections between all frontal and parietal areas of the parcellation scheme. First, this matrix was found to have in- and out-degree distributions that did not follow a power law. Instead, each was best approximated by a Gaussian distribution, signifying that the connectivity of each area in the FPN is relatively similar and that the network does not rely on hubs. Second, the dynamical relay motif, M9, was found to be overrepresented in the FPN. This 3-node motif is the optimal arrangement for near-zero and non-zero phase synchrony to propagate through the network. Finally, the FPN was found to utilize a small-world architecture. This allows for simultaneous integration and specialization of function. Important aspects of cognition such as attention and working memory have been shown to require both integration and specialization, mediated by near-zero and non-zero phase synchrony, in order to function properly. Further, they benefit from the reliability afforded by the FPN’s homogeneous connectivity profile, which acts as a substrate resilient to targeted structural insult but vulnerable to random attack.
This suggests that diseases that impair cognitive function supported by the FPN may owe their effectiveness to a random attack strategy. These findings provide a candidate topological mechanism, in the M9 dynamical relay motif, for the synchrony observed during complex cognitive functions. The results also serve as a benchmark to be used in the network-level treatment of neurological disorders such as Alzheimer’s or Parkinson’s disease, where the types of cognition the FPN supports are impaired. Finally, they can inform future neuromorphic circuit designs which aim to perform certain aspects of cognition.
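As an illustration of the kind of analysis involved, the sketch below counts how often a given directed 3-node edge pattern occurs as an induced subgraph of a connectivity graph, up to relabeling of the three nodes. The helper and the example edge set are hypothetical; the exact edge configuration of motif M9 follows the motif taxonomy cited by the study and is not reproduced here.

```python
from itertools import combinations, permutations

def count_motif(adj, motif_edges):
    """Count induced 3-node subgraphs of a directed graph matching a motif.

    adj:         dict mapping each node to the set of nodes it projects to
    motif_edges: set of (i, j) directed edges over template labels 0, 1, 2
    A triple matches if, under some relabeling, its induced edge set equals
    the motif's edge set exactly (induced-subgraph matching).
    """
    count = 0
    for triple in combinations(list(adj), 3):
        # edges actually present among the three nodes
        induced = {(u, v) for u in triple for v in triple
                   if u != v and v in adj[u]}
        for perm in permutations(range(3)):
            mapped = {(triple[perm[i]], triple[perm[j]])
                      for i, j in motif_edges}
            if induced == mapped:
                count += 1
                break
    return count
```

Comparing such counts against those from degree-matched random graphs is the standard way to decide whether a motif is overrepresented.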

References

  1. Petrides M, Pandya DN. Efferent association pathways from the rostral prefrontal cortex in the macaque monkey. Journal of Neuroscience. 2007; 27: 11573–11586.

P14 Avalanches, criticality and correlations in self-organised nanoscale networks

Josh Mallinson1, Shota Shirai1, Susant Acharya1, Saurabh Bose1, Matthew Pike1, Edoardo Galli1, Matthew Arnold2, Simon Brown1

1University of Canterbury, School of Physical and Chemical Sciences, Christchurch, New Zealand; 2University of Technology Sydney, School of Mathematical and Physical Sciences, Sydney, Australia

Correspondence: Josh Mallinson (josh.mallinson@pg.canterbury.ac.nz)

BMC Neuroscience 2020, 21(Suppl 1):P14

Neuronal avalanches are one of the key characteristic features of signal propagation in the brain [1]. These avalanches originate from the complexity of the network of neurons and synapses, which are widely believed to form a self-organised critical system. Criticality is hypothesised to be intimately linked to the brain’s computational power [2,3], but efforts to achieve neuromorphic computation have so far focused on highly organised architectures, such as integrated circuits [4] and regular arrays of memristors [5]. To date, little attention has been given to developing complex network architectures that exhibit criticality and thereby maximise computational performance [6]. We show here, using methods developed by the neuroscience community [7], that electrical signals from self-organised percolating networks of nanoparticles [8] exhibit brain-like correlations and criticality [9]. Specifically, the sizes and durations of avalanches of switching events are power-law distributed, and the power-law exponents satisfy rigorous criteria for criticality. Additionally, we show that both the networks and their dynamics are scale-free. These networks provide a low-cost platform for computational approaches that rely on spatiotemporal correlations, such as reservoir computing, and are a significant step towards creating neuromorphic device architectures.
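The avalanche statistics referred to above are typically computed by segmenting a binned event time series into maximal supra-threshold runs, whose summed activity and length give avalanche size and duration. A minimal version of that bookkeeping (not the authors' pipeline, whose thresholding follows the cited neuroscience methods) is:

```python
import numpy as np

def avalanches(activity, theta=0.0):
    """Segment a 1-D event-count time series into avalanches.

    An avalanche is a maximal run of bins with activity > theta.
    Returns (sizes, durations): size is the summed activity within a run,
    duration is the run length in bins.
    """
    active = np.asarray(activity) > theta
    sizes, durations = [], []
    run_size = run_len = 0
    for a, x in zip(active, activity):
        if a:
            run_size += x
            run_len += 1
        elif run_len:                  # run just ended: record it
            sizes.append(run_size)
            durations.append(run_len)
            run_size = run_len = 0
    if run_len:                        # trailing avalanche at series end
        sizes.append(run_size)
        durations.append(run_len)
    return sizes, durations
```

The histograms of the returned sizes and durations are what one fits for power-law behaviour when testing criticality criteria.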

References

  1. Beggs JM, Plenz D. Neuronal avalanches in neocortical circuits. Journal of Neuroscience. 2003; 23(35): 11167–11177.

  2. Munoz MA. Colloquium: criticality and dynamical scaling in living systems. Reviews of Modern Physics. 2018; 90(3): 031001.

  3. Cocchi L, Gollo LL, Zalesky A, Breakspear M. Criticality in the brain: a synthesis of neurobiology, models and cognition. Progress in Neurobiology. 2017; 158: 132–152.

  4. Merolla PA, et al. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science. 2014; 345(6197): 668–673.

  5. Burr GW, et al. Neuromorphic computing using non-volatile memory. Advances in Physics: X. 2017; 2(1): 89–124.

  6. Srinivasa N, Stepp ND, Cruz-Albrecht J. Criticality as a set-point for adaptive behavior in neuromorphic hardware. Frontiers in Neuroscience. 2015; 9: 449.

  7. Friedman N, et al. Universal critical dynamics in high resolution neuronal avalanche data. Physical Review Letters. 2012; 108(20): 208102.

  8. Sattar A, Fostner S, Brown SA. Quantized conductance and switching in percolating nanoparticle films. Physical Review Letters. 2013; 111(13): 136808.

  9. Mallinson JB, et al. Avalanches and criticality in self-organised nanoscale networks. Science Advances. 2019; 5(11): eaaw8438.

P15 Learning sequences of correlated patterns in recurrent networks

Subhadra Mokashe1, Nicolas Brunel2

1Duke University, Neurobiology, Durham, North Carolina, United States of America; 2Duke University, Neurobiology and Physics, Durham, North Carolina, United States of America

Correspondence: Subhadra Mokashe (subhadramokashe@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P15

What determines the format of memory representations in cortical networks is a subject of active research. During memory tasks, the retrieval of stored memories is characterized either by the persistent elevation in the firing rate of a set of neurons (‘persistent activity’) [1] or by ordered transient activation of different sets of neurons (‘sequential activity’) [2]. Multiple theoretical studies have shown that temporally symmetric Hebbian learning rules give rise to fixed point attractor representation of memory (e.g., [3] and references therein), while temporally asymmetric learning rules lead to a dynamic sequential representation of memories (e.g., [4] and references therein). These studies assume that inputs to the network during learning have no temporal correlations.

The sensory information received by brain networks is likely to be temporally correlated. We study temporally asymmetric Hebbian learning rules in a recurrent network of rate-based neurons in the presence of temporal correlations in the inputs, and characterize how the inputs shape the network dynamics and memory representation using both numerical simulations and mean-field analysis. We show that the network dynamics depend on the temporal correlations in the input stream the network receives. For inputs with a short correlation timescale, the network exhibits sequential activity (Fig. 1A, left), while for longer correlations within the input stream, the network settles into a fixed point attractor during retrieval (Fig. 1A, right). At intermediate values of the correlation timescale, the network partially traverses the input sequence before settling into an attractor state (Fig. 1A, middle). We find that correlations increase the sequential memory capacity of the network. Non-linear learning rules increase the range of correlation timescales for which the network represents memories as sequential activity (Fig. 1B). We also show that the network maintains a sequential representation both in the case of sequences of discrete patterns and in the continuum limit (Fig. 1C). Our work thus suggests that the correlation timescales of inputs at the time of learning have a strong influence on the nature of network dynamics during retrieval.

Fig. 1

A The activity of neurons (top) and overlaps with stored patterns from the simulations and mean-field theory (bottom), for short, intermediate, and long correlation timescales (left to right). B Probability of sequence retrieval with a linear and a non-linear learning rule as a function of the correlation timescale. C Probability of sequence retrieval as a function of tau_OU, for different discretizations of a continuous OU process
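The temporally asymmetric learning rule discussed above can be sketched in its simplest rate-model form, in which each stored pattern is associated to its successor so that the connectivity stores the sequence. The outer-product form and the binary patterns below are illustrative assumptions, not the non-linear rule analyzed in the study.

```python
import numpy as np

def asymmetric_hebbian_W(patterns, A=1.0):
    """Connectivity storing a sequence via a temporally asymmetric
    Hebbian rule: activity in pattern mu potentiates connections onto
    the neurons active in pattern mu+1 (normalized by network size N)."""
    patterns = np.asarray(patterns, dtype=float)
    P, N = patterns.shape
    W = np.zeros((N, N))
    for mu in range(P - 1):
        # pre = pattern mu, post = pattern mu+1
        W += A * np.outer(patterns[mu + 1], patterns[mu]) / N
    return W
```

Iterating the rate dynamics r <- f(W r) from the first pattern then tends to step through the stored sequence, which is the sequential retrieval regime described in the abstract; a temporally symmetric rule would instead produce fixed point attractors.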

References

  1. Funahashi S, Bruce CJ, Goldman-Rakic PS. Mnemonic coding of visual space in the monkey’s dorsolateral prefrontal cortex. Journal of Neurophysiology. 1989; 61(2): 331–349.

  2. Harvey CD, Coen P, Tank DW. Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature. 2012; 484(7392): 62–68.

  3. Pereira U, Brunel N. Unsupervised learning of persistent and sequential activity. Frontiers in Computational Neuroscience. 2020; 13: 97.

  4. Gillett M, Pereira U, Brunel N. Characteristics of sequential activity in networks with temporally asymmetric Hebbian learning. bioRxiv. 2019; 818773.

P16 A computational model for time cell-based learning of interval timing

Sorinel Oprisan1, Tristan Aft1, Michael Cox2, Mona Buhusi3, Catalin Buhusi3

1College of Charleston, Department of Physics and Astronomy, Charleston, South Carolina, United States of America; 2Clemson University, Mechanical Engineering, Clemson, South Carolina, United States of America; 3Utah State University, Psychology, Logan, Utah, United States of America

Correspondence: Sorinel Oprisan (oprisans@cofc.edu)

BMC Neuroscience 2020, 21(Suppl 1):P16

Lesion and pharmacological studies found that interval timing is the emergent property of an extensive neural network that includes the prefrontal cortex (PFC), the basal ganglia (BG), and the hippocampus (HIP). We used our Striatal Beat Frequency (SBF) model with a large number of PFC oscillators to produce beats from the coincidence detection performed by the BG [1,2]. The response of the PFC-BG neural network provides an output that (1) accurately identifies the criterion time, i.e., the time at which the reinforcement was presented during reinforced trials, and (2) is scalar, i.e., the prediction error is proportional to the criterion time. We found that, although the PFC-BG network can create beats, the accuracy of the timing depends on the number of PFC oscillators and the frequency range they cover [3,4].

The ability to discriminate between multiple durations requires a metric space in which durations can be compared. We hypothesized that time cells, which were recently discovered in the hippocampus and ramp up their firing when the subject is at a specific temporal marker in a behavioral test, can offer a time base for interval timing. We expanded the SBF model by incorporating HIP time cells that (1) provide a natural time base, and (2) could be the cellular root of the scalar property of interval timing observed in all behavioral experiments (see [5]). Our model of interval timing learning assumes that there are two stages to this process. First, during the reinforced trials, the subject learns the boundaries of the temporal duration. This process is similar to HIP place cell activity, which first forms an accurate spatial map of the edges of the environment. Subsequently, the time cells are recruited to cover the entire to-be-timed duration uniformly. Without any learning rule, i.e., without any feedback from the PFC-BG network, the population of time cells simply produces a uniform average time field. In our computational model, the learning rule requires the HIP time cells to adjust their activity to mirror the output of the PFC-BG network. A plausible mechanism for the modulation of HIP time cell activity could involve dopamine released during the reinforced trials. We tested numerically different learning rules and found that one of the most efficient, in terms of the number of trials required until convergence, is the diffusion-like, or nearest-neighbor, algorithm.
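The beat-based readout at the heart of the SBF idea can be sketched as a toy coincidence detector: oscillators phase-reset at trial onset are compared against the oscillator state stored at the reinforced criterion time, and the similarity peaks when the elapsed time matches the criterion. The cosine readout and the dot-product detector below are illustrative simplifications of the authors' model.

```python
import numpy as np

def sbf_output(freqs, t, criterion_time):
    """Toy striatal-beat-frequency readout.

    freqs:          array of cortical oscillator frequencies (Hz)
    t:              array of elapsed times since trial onset (s)
    criterion_time: reinforced time (s) whose oscillator state is 'stored'
    Returns the normalized similarity between the current phase vector and
    the stored one, for each time in t.
    """
    phases_t = np.cos(2.0 * np.pi * np.outer(freqs, t))       # (F, T)
    stored = np.cos(2.0 * np.pi * np.asarray(freqs) * criterion_time)
    return stored @ phases_t / len(freqs)
```

The output peaks near t = criterion_time because only there do all oscillators simultaneously match their stored phases, which is the coincidence the BG readout detects in the SBF framework.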

References

  1. Oprisan SA, Aft T, Buhusi M, Buhusi C. Scalar timing in memory: A temporal map in the hippocampus. Journal of Theoretical Biology. 2018; 438: 133–142.

  2. Oprisan SA, Buhusi M, Buhusi CV. A population-based model of the temporal memory in the hippocampus. Frontiers in Neuroscience. 2018; 12: 521.

  3. Buhusi CV, Oprisan SA, Buhusi M. Clocks within clocks: timing by coincidence detection. Current Opinion in Behavioral Sciences. 2016; 8: 207–213.

  4. Buhusi CV, Reyes MB, Gathers CA, Oprisan SA, Buhusi M. Inactivation of the medial-prefrontal cortex impairs interval timing precision, but not timing accuracy or scalar timing in a peak-interval procedure in rats. Frontiers in Integrative Neuroscience. 2018; 12: 20.

  5. Oprisan SA, Buhusi CV. What is all the noise about in interval timing? Philosophical Transactions of the Royal Society B: Biological Sciences. 2014; 369(1637): 20120459.

P17 The impacts of the connectome on coupled networks of Wilson-Cowan models with homeostatic plasticity

Wilten Nicola1, Sue A Campbell2

1University of Calgary, Calgary, Canada; 2University of Waterloo, Applied Mathematics, Waterloo, Canada

Correspondence: Wilten Nicola (wilten.nicola@ucalgary.ca)

BMC Neuroscience 2020, 21(Suppl 1):P17

We study large networks of Wilson-Cowan neural field systems with homeostatic plasticity. Small networks of this type, consisting of a single recurrently coupled node or two cross-coupled nodes, are known to display rich dynamical states [1], including chaos, mixed-mode oscillations, and synchronized chaos, even under these simple connectivity profiles. Here, we consider networks whose connectomes display so-called L1 normalization but are otherwise arbitrary, in the large-network limit. We find that for the majority of classical connectomes considered (random, small-world), the network displays a large-scale chaotic synchronization to the attractor states and bifurcation sequence of a single recurrently coupled node, as in [1]. However, connectomes with sufficiently large pairs of eigenvalues can trigger multiple Hopf bifurcations, which can potentially collide in torus bifurcations that destabilize the synchronized, single-node attractor solutions. Our analysis demonstrates that for Wilson-Cowan systems with homeostatic plasticity, the dominant determinant of network activity is not the connectome directly, but rather the connectome’s ability to generate large eigenvalues that can induce multiple nearby Hopf bifurcations. If the connectome cannot generate such large pairs of eigenvalues, the dynamics of the network become limited to those of a single recurrently coupled node.
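The eigenvalue criterion above is straightforward to probe numerically. The sketch below row-normalizes a connectome (one reading of "L1 normalization", so each node's incoming weights sum to one) and returns the sorted eigenvalue moduli; for a nonnegative row-stochastic matrix, the leading modulus is exactly 1, and the question is how close the next pair comes to it. The random connectome construction is an illustrative assumption.

```python
import numpy as np

def l1_normalized_spectrum(W):
    """Row-normalize a nonnegative connectome so each row sums to 1,
    then return the eigenvalue moduli sorted in descending order."""
    W = np.asarray(W, dtype=float)
    row_sums = W.sum(axis=1, keepdims=True)
    Wn = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)
    return np.sort(np.abs(np.linalg.eigvals(Wn)))[::-1]

# illustrative random (Erdos-Renyi-like) connectome with weighted edges
rng = np.random.default_rng(1)
W = (rng.random((200, 200)) < 0.1) * rng.random((200, 200))
moduli = l1_normalized_spectrum(W)
```

For a generic random connectome the subleading moduli sit in a small bulk far below 1, consistent with the finding that such connectomes synchronize to single-node dynamics; structured connectomes that push additional eigenvalue pairs toward the leading one are the candidates for extra Hopf bifurcations.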

Reference

  1. Nicola W, Hellyer PJ, Campbell SA, Clopath C. Chaos in homeostatically regulated neural systems. Chaos: An Interdisciplinary Journal of Nonlinear Science. 2018; 28(8): 083104.

P18 Inhibitory gating in the dentate gyrus

Claudio Mirasso1, Cristian Estarellas1, Santiago Canals2

1Universitat de les Illes Balears, Instituto de Física Interdisciplinar y Sistemas Complejos, Palma de Mallorca, Spain; 2Instituto de Neurociencias, San Juan de Alicante, Spain

Correspondence: Claudio Mirasso (claudio@ifisc.uib-csic.es)

BMC Neuroscience 2020, 21(Suppl 1):P18

Electrophysiological recordings have demonstrated a tight inhibitory control of hilar interneurons over Dentate Gyrus granule cells (DGgc) [1,2]. This excitation/inhibition balance is crucial for information transmission [3] and likely relies on inhibitory synaptic plasticity [4]. Our experiments show that LTP induction in the Perforant Pathway (PP) not only potentiates glutamatergic synapses but, unexpectedly, decreases feed-forward inhibition in the DG, facilitating activity propagation in the circuit and modifying long-range connectivity in the brain. To investigate this phenomenon, we study a circuit of populations of point neurons described by the Izhikevich model. The model contains entorhinal cortex (EC) neurons, DGgc, mossy cells, basket cells and hilar interneurons. The proportion of neurons per population and the connectivity of the network are based on published anatomical data and are fitted to reproduce in vivo electrophysiological recordings [2]. The effect of LTP on the local DG circuit is studied in the model by adapting the synaptic weights of the EC projections. The results obtained from the model, before and after LTP induction, support the counterintuitive experimental observation that LTP induces synaptic depression of the feed-forward inhibitory connection. We show that LTP increases the efficiency of the glutamatergic input in recruiting the inhibitory network, resulting in a reciprocal cancellation of basket cell population activity. We validated the model with electrophysiological experiments, inducing LTP in the PP of anaesthetized mice in vivo and recording excitatory and inhibitory currents in vitro in the same animals.
Overall, our findings suggest that LTP of the EC input increases the excitation/inhibition balance and facilitates activity propagation to the next station in the circuit by recruiting an interneuron-interneuron network that suppresses the tight control of basket cells over DGgc firing.
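The point-neuron dynamics referenced above can be sketched in a few lines (a minimal NumPy illustration using the standard Izhikevich regular-spiking parameters; the actual model's population sizes, connectivity and fitted inputs are not reproduced here):

```python
import numpy as np

def izhikevich(I, T=1000.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate a single Izhikevich neuron driven by a constant current I.

    dv/dt = 0.04 v^2 + 5 v + 140 - u + I
    du/dt = a (b v - u), with reset v -> c, u -> u + d on spike (v >= 30 mV).
    Returns spike times in ms.
    """
    n = int(T / dt)
    v, u = c, b * c
    spikes = []
    for k in range(n):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:            # spike peak reached
            spikes.append(k * dt)
            v, u = c, u + d      # reset
    return np.array(spikes)

# Regular-spiking neuron: stronger drive yields a higher firing rate
spikes_weak = izhikevich(I=10.0)
spikes_strong = izhikevich(I=20.0)
```

In the full model, populations of such units (granule cells, mossy cells, basket cells, hilar interneurons) are coupled through the anatomically constrained connectivity described above.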

References

  1. Bragin A, et al. Gamma (40-100 Hz) oscillation in the hippocampus of the behaving rat. Journal of Neuroscience. 1995; 15(1): 47-60.

  2. Pernía-Andrade AJ, Jonas P. Theta-gamma-modulated synaptic currents in hippocampal granule cells in vivo define a mechanism for network oscillations. Neuron. 2014; 81(1): 140-152.

  3. Bartos M, Vida I, Frotscher M, Geiger JR, Jonas P. Rapid Signaling at Inhibitory Synapses in a Dentate Gyrus Interneuron Network. Journal of Neuroscience. 2001; 21(8): 2687-2698.

  4. Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W. Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science. 2011; 334(6062): 1569-1573.

P19 Neuroscience gateway enabling modeling and data processing using high performance and high throughput computing resources

Amitava Majumdar1, Subhashini Sivagnanam1, Kenneth Yoshimoto1, Dave Nadeau1, Martin Kandes1, Trever Petersen1, Ramon Martinez2, Dung Troung2, Arnaud Delorme2, Scott Makeig2, Ted Carnevale3

1University of California, San Diego, San Diego Supercomputer Center, La Jolla, California, United States of America; 2University of California, San Diego, Institute for Neural Computation, La Jolla, California, United States of America; 3Yale University, Neurobiology, New Haven, Connecticut, United States of America

Correspondence: Amitava Majumdar (majumdar@sdsc.edu)

BMC Neuroscience 2020, 21(Suppl 1):P19

The Neuroscience Gateway (NSG) has been serving the computational neuroscience community since early 2013. Its initial goal was to reduce the technical and administrative barriers that neuroscientists face in accessing and using the high performance computing (HPC) resources needed for large-scale neuronal modeling projects. For this purpose, NSG provided tools and software that require, and run efficiently on, HPC resources available as part of the US XSEDE (Extreme Science and Engineering Discovery Environment) program, which coordinates usage of academic supercomputers. Since around 2017, experimentalists such as cognitive neuroscientists, psychologists and biomedical researchers have started to use NSG for neuroscience data processing, analysis and machine learning. Data processing workloads are better matched to high throughput computing (HTC) resources, which suit the single-core jobs typically run to process individual subjects' data sets. Machine learning (ML) workloads require GPUs for well-known ML frameworks such as TensorFlow. NSG is adapting to the needs of experimental neuroscientists by providing HTC resources, in addition to the HPC resources with which it has successfully served the computational neuroscience community for many years. The data-processing work of experimentalists also requires NSG to add various data functionalities, such as the ability to transfer and store large data sets on NSG, validate the data, let multiple users process the same data, publish final data products, and visualize and search the data. These features are currently being added to NSG. Separately, there is demand from the neuroscience community to make NSG an environment where neuroscience tool developers can test, benchmark, and scale their newly developed tools and eventually disseminate them via NSG to neuroscience users.

The poster will describe NSG from its beginning and how it is evolving to meet the future needs of the neuroscience community: (i) NSG has so far primarily served the computational neuroscience community, as well as some data-processing-focused neuroscience researchers; (ii) new features are being added to make it a suitable and efficient dissemination environment for lab-developed neuroscience tools. These will allow tool developers to disseminate their tools on NSG while taking advantage of the functionalities that have served NSG users well over the last seven years: a growing user base, an easy user interface, an open environment, the ability to access and run jobs on powerful compute resources, free supercomputer time, a well-established training and outreach program, and a functioning user support system. Together, these features make NSG an ideal environment for the dissemination and use of lab-developed computational and data processing neuroscience tools; (iii) NSG is being enhanced for more seamless access to HTC resources provided by the Open Science Grid (OSG) and commercial clouds, which will allow data processing and machine learning workloads to take advantage of HTC and cloud resources, including GPUs; (iv) new data management features are being added, including the ability to transfer and upload large data sets, validate uploaded data, and share and publish data.

P20 Effects of dopamine on networks of barrel cortex

Fleur Zeldenrust1, Chao Huang2, Prescilla Uijtewaal3, Bernhard Englitz1, Tansu Celikel1

1Donders Institute for Brain, Cognition and Behaviour, Radboud University, Department of Neurophysiology, Nijmegen, Netherlands; 2University of Leipzig, Department of Biology, Leipzig, Germany; 3UMC Utrecht, Imaging Division, Department of Radiotherapy, Utrecht, Netherlands

Correspondence: Fleur Zeldenrust (fleurzeldenrust@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P20

The responses of excitatory pyramidal cells and inhibitory interneurons in cortical networks are shaped by each neuron's place in the network (connectivity) and its biophysical properties (ion channel expression [1]), which are modulated by top-down neuromodulatory input, including dopamine. Using a recently developed ex vivo method [2], we showed that activation of the D1 receptor (D1R) increases the information transfer of fast-spiking, but not regular-spiking, cells by decreasing their threshold [3]. Moreover, we showed that these differences in neural responses are accompanied by faster decision-making at the behavioural level. However, how these single-cell changes in spike responses lead to the behavioural changes is still unclear. Here, we aim to bridge the gap between single-cell and behavioural effects by considering the effects of D1R activation at the network level.

We took a 3-step approach and simulated the effects of dopamine by lowering the thresholds of inhibitory but not excitatory neurons:

  1) Network construction. We created a balanced network of L2/3 and L4 of the barrel cortex, consisting of locally connected integrate-and-fire neurons. We reconstructed the somatosensory cortex in soma resolution ([4], Fig. 1A), and adapted the number and ratio of excitatory and inhibitory neurons and the number of thalamic inputs accordingly.

  2) Activity of the balanced state. The adaptations in the neural populations and connectivity resulted in a heterogeneous asynchronous regime [5] in L2/3, with highly variable single-neuron firing rates suggesting a functional role in stimulus separation, and a 'classical' asynchronous regime in L4, with more constant firing rates suggestive of an information transmission role (Fig. 1B).

  3) Functional effects. We used a spike-based FORCE learning [6,7] approach, trained on either a gap-crossing task (data from [8]) or a pole detection task (publicly available data from [9], Fig. 1C). We compared the results against a benchmark consisting of a 3-layer deep neural net with a recurrent layer.

Fig. 1

A Density of identified cellular populations across the six cortical layers. B Effects of the relative number of inhibitory neurons on the asynchronous state. Yellow indicates classical asynchronous (CA) dynamics, purple indicates a heterogeneous asynchronous (HA) regime, white indicates no asynchronous irregular state. C Spiking FORCE learning during a pole localization task

References

  1. Azarfar A, Calcini N, Huang C, Zeldenrust F, Celikel T. Neural coding: A single neuron's perspective. Neuroscience & Biobehavioral Reviews. 2018; 94: 238-247.

  2. Zeldenrust F, de Knecht S, Wadman WJ, Denève S, Gutkin B. Estimating the information extracted by a single spiking neuron from a continuous input time series. Frontiers in Computational Neuroscience. 2017; 11: 49.

  3. Calcini N, et al. Cell-type specific modulation of information transfer by dopamine. Cosyne Abstract 2019, Lisbon.

  4. Huang C, Zeldenrust F, Celikel T. DepartmentofNeurophysiology/Cortical-representation-of-touch-in-silico. GitHub. 2019; https://github.com/DepartmentofNeurophysiology/Cortical-representation-of-touch-in-silico (accessed March 2, 2020).

  5. Ostojic S. Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons. Nature Neuroscience. 2014; 17: 594-600.

  6. Sussillo D, Abbott LF. Generating Coherent Patterns of Activity from Chaotic Neural Networks. Neuron. 2009; 63: 544-557.

  7. Nicola W, Clopath C. Supervised learning in spiking neural networks with FORCE training. Nature Communications. 2017; 8: 1-15.

  8. Azarfar A, et al. An open-source high-speed infrared videography database to study the principles of active sensing in freely navigating rodents. GigaScience. 2018; 7(12): giy134.

  9. Peron SP, Freeman J, Iyer V, Guo C, Svoboda K. A cellular resolution map of barrel cortex activity during tactile behavior. Neuron. 2015; 86(3): 783-799.

P21 Synergy in the encoding of a moving bar by a retina

Kuan H Chen1, Qi-Rong Lin2, Chi K Chen2

1Academia Sinica, Taiwan; 2Academia Sinica, Institute for Physics, Taiwan

Correspondence: Kuan H Chen (nghdavid123@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P21

To produce timely responses, animals must overcome delays in the visual processing pathway by predicting motion. Previous studies [1] revealed that predictive information about motion is encoded in the spiking activities of retinal ganglion cells (RGCs) early in the visual pathway. To study the predictive properties of a retina more systematically, stimuli in the form of a stochastically moving bar were presented to bullfrog retinas in a multi-electrode recording system. Trajectories of the bar were produced by Ornstein-Uhlenbeck (OU) processes with different time correlations (memories), induced by a Butterworth low-pass filter with various cut-off frequencies.

We then investigated the predictive properties of single RGCs by calculating the time-shifted mutual information (MI(x,r;δt)) between the spiking output of RGCs and the bar trajectories. Intuitively, the peak position of MI(δt) should be negative, reflecting the processing delay of the retina. However, under low-pass OU (LPOU) stimulation we measured both positive and negative peak positions across RGCs. This finding indicates that some RGCs (P-RGCs) are predictive while the others are non-predictive (NP-RGCs). For LPOU stimuli with various correlation times, the MI peak positions of the P-RGCs are positively correlated with the correlation times of the stimuli, while those of the NP-RGCs remain around a fixed negative value (−50 ms).
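The time-shifted mutual information analysis can be illustrated as follows (a NumPy sketch with hypothetical bin counts and a toy delayed response standing in for recorded RGC activity; the pure OU stimulus here stands in for the Butterworth-filtered trajectories used in the experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_process(n, dt=0.01, tau=0.5):
    """Ornstein-Uhlenbeck trajectory: dx = -(x/tau) dt + sqrt(2/tau) dW."""
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = x[k-1] - (x[k-1] / tau) * dt + np.sqrt(2 * dt / tau) * rng.standard_normal()
    return x

def shifted_mi(stim, resp, shift, bins=8):
    """Mutual information (bits) between stim(t + shift) and resp(t), via 2D histogram."""
    if shift >= 0:
        s, r = stim[shift:], resp[:len(resp) - shift]
    else:
        s, r = stim[:shift], resp[-shift:]
    p_sr, _, _ = np.histogram2d(s, r, bins=bins)
    p_sr /= p_sr.sum()
    p_s = p_sr.sum(axis=1, keepdims=True)
    p_r = p_sr.sum(axis=0, keepdims=True)
    nz = p_sr > 0
    return np.sum(p_sr[nz] * np.log2(p_sr[nz] / (p_s @ p_r)[nz]))

x = ou_process(20000)
# Toy "retina": response follows the stimulus with a 50-bin processing delay
resp = np.roll(x, 50) + 0.3 * rng.standard_normal(len(x))
mi = [shifted_mi(x, resp, s) for s in range(-100, 101, 10)]
# MI should peak at a negative shift: the response is driven by the past stimulus
```

A P-RGC would instead show a peak at positive shifts, i.e. its spikes carry information about the future trajectory.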

Furthermore, we applied principal component analysis [2] to the stimulus waveforms preceding each of the neuron's spikes (spike-triggered stimuli) to separate spikes into two clusters, according to whether their projections onto the principal component are negative or positive. We find that predictive information can be extracted from the apparently non-predictive NP-RGCs when MI(δt) is computed with the spikes from each cluster separately. This last finding suggests that spikes from a single RGC might have different origins. Since the responses (r) of RGCs can carry information about both the position (x) and velocity (v) of the moving bar, we also performed partial information decomposition [3] on the mutual information between r and the combined state {x,v}, which can be written as I[(x,v):r] = S + R + Ux + Uv, where Ux and Uv are the unique contributions from x and v, respectively, and R and S are the redundant and synergistic contributions. We find that synergy between x and v is needed to produce anticipation. A simple spike-generation model with synergy between x and v was constructed to understand our experimental data.

Fig. 1

Two kinds of responses under low-pass OU stimulation with different correlation times: predicting cells (P, solid lines) and non-predicting cells (N, dotted lines)

References

  1. Palmer SE, Marre O, Berry MJ, Bialek W. Predictive information in a sensory population. Proceedings of the National Academy of Sciences. 2015; 112(22): 6908-6913.

  2. Geffen MN, de Vries SEJ, Meister M. Retinal Ganglion Cells Can Rapidly Change Polarity from Off to On. PLoS Biology. 2007; 5(3): e65.

  3. Williams PL, Beer RD. Nonnegative Decomposition of Multivariate Information. arXiv. 2010; 1004.2515 [preprint].

P22 Retina as a negative group delay filter

PoYu Chou1, Chi K Chen2

1Academia Sinica, Taiwan; 2Academia Sinica, Institute for Physics, Taiwan

Correspondence: PoYu Chou (az6520555@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P22

Anticipation is important for living organisms to survive. Anticipatory dynamics have been found in the retina [1], where they can compensate for the delay of visual signals during transmission and processing. We investigated the anticipatory properties of responses (r(t)) of bullfrog retinas in a multi-electrode array system, using whole-field stochastic stimulation (S(t)) generated by an Ornstein-Uhlenbeck (OU) process. Time-correlated S(t) can then be created by passing the OU signal through a low-pass filter with various cutoff frequencies. The anticipatory properties of the elicited spikes are characterized by the time-lagged mutual information (TLMI) between r(t) and S(t) [1]. We find that only stimulations with long enough correlations can elicit anticipatory responses from the retinas, similar to the finding of [1], in which information in the stimulation was time coded; in the present experiment, however, information is rate coded. Recently, Voss [2] proposed that a negative group delay (NGD) filter can produce anticipatory responses to low-pass filtered random signals. To test this idea, we show that an NGD filter with appropriate parameters can indeed reproduce from S(t) a TLMI similar to those observed in experiments. Furthermore, experiments with dark and bright Gaussian light pulses confirmed that the retina can be considered an NGD filter, but only for the dark pulses. This last finding, together with the NGD capability of the retina, suggests that there is a delayed negative feedback in the off-pathway of the retina. In fact, a two-neuron model with delayed negative feedback can be shown to produce the properties of an NGD filter. Presumably, such a mechanism might also exist in a retina.
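The delayed-negative-feedback mechanism proposed above can be illustrated with a minimal discrete-time sketch (all parameter values are purely illustrative, not those of the retinal circuit):

```python
import numpy as np

def ngd_filter(x, g=0.5, delay=20):
    """Delayed negative feedback: y[k] = x[k] - g * y[k - delay].

    For inputs that vary slowly relative to `delay`, this map shows
    negative group delay of roughly g*delay/(1+g) samples, so the
    output peaks slightly *before* the input does.
    """
    y = np.zeros_like(x)
    for k in range(len(x)):
        y[k] = x[k] - (g * y[k - delay] if k >= delay else 0.0)
    return y

# A slow Gaussian pulse standing in for a "dark pulse" stimulus
t = np.arange(2000)
x = np.exp(-((t - 1000.0) ** 2) / (2 * 100.0 ** 2))
y = ngd_filter(x)

# The filtered pulse peaks earlier than the input: an anticipatory response
advance = int(np.argmax(x)) - int(np.argmax(y))
```

The low-frequency phase lead follows from the transfer function 1/(1 + g e^{-iωΔ}), whose group delay at ω → 0 is approximately −gΔ/(1+g).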

References

  1. Chen KS, Chen CC, Chan CK. Characterization of predictive behavior of a retina by mutual information. Frontiers in Computational Neuroscience. 2017; 11: 66.

  2. Voss HU. Signal prediction by anticipatory relaxation dynamics. Physical Review E. 2016; 93(3): 030201.

P23 Closed-form approximations of single-channel calcium nanodomains in the presence of cooperative calcium buffers

Victor Matveev, Yinbo Chen

New Jersey Institute of Technology, Department of Mathematical Sciences, Newark, United States of America

Correspondence: Victor Matveev (matveev@njit.edu)

BMC Neuroscience 2020, 21(Suppl 1):P23

Calcium ion (Ca2+) elevations produced in the vicinity of single open Ca2+ channels are termed Ca2+ nanodomains, and play an important role in triggering secretory vesicle exocytosis, myocyte contraction, and other fundamental physiological processes. Ca2+ nanodomains are shaped by the interplay between Ca2+ influx, Ca2+ diffusion and its binding to Ca2+ buffers, which absorb most of the Ca2+ entering the cell during a depolarization event. In qualitative studies of local Ca2+ signaling, the dependence of Ca2+ concentration on the distance from the Ca2+ channel source can be approximated with reasonable accuracy by analytic approximations of quasi-stationary solutions of the corresponding reaction-diffusion equations. Such closed-form approximations help to reveal the qualitative dependence of nanodomain characteristics on Ca2+ buffering and diffusion parameters, without resorting to computationally expensive numerical simulations. Although a range of nanodomain approximations has been developed for the case of Ca2+ buffers with a single Ca2+ binding site, for example the Rapid Buffer Approximation, the Excess Buffer Approximation, and the Linear Approximation [1,2], most biological buffers have more complex Ca2+-binding stoichiometry. Further, several important Ca2+ buffers and sensors such as calretinin and calmodulin consist of distinct EF-hand domains, each possessing two Ca2+ binding sites exhibiting significant cooperativity in binding, whereby the affinity of the second Ca2+ binding reaction is much higher than that of the first. To date, only the Rapid Buffer Approximation (RBA) has been generalized to Ca2+ buffers with two binding sites [3]. However, the performance of RBA in the presence of cooperative Ca2+ buffers is limited by the complex interplay between the condition of slow diffusion implied by the RBA and the slow rate of the first Ca2+ binding reaction characterizing cooperative Ca2+ binding.
To resolve this problem, we present modified versions of several Ansätze recently introduced for the case of simple buffers [4], extending them to the case of Ca2+ buffers with 2-to-1 stoichiometry. These new approximants interpolate between the short-range and long-range distance dependence of the Ca2+ nanodomain concentration using a combination of rational and exponential functions. We examine in detail the parameter dependence of the approximation accuracy, and show that this method is superior to RBA for a wide range of buffering parameter values. In particular, the new approximants accurately estimate the distance dependence of the Ca2+ concentration in the presence of calretinin or calmodulin.
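For orientation, the single-binding-site Excess Buffer Approximation mentioned above takes the familiar closed form (background only, not the new two-site result):

```latex
[\mathrm{Ca}^{2+}](r) \approx [\mathrm{Ca}^{2+}]_\infty
  + \frac{\sigma}{4\pi D_{\mathrm{Ca}}\, r}\, e^{-r/\lambda},
\qquad
\lambda = \sqrt{\frac{D_{\mathrm{Ca}}}{k_{\mathrm{on}}\,[\mathrm{B}]_\infty}},
```

where σ is the channel source strength, D_Ca the Ca2+ diffusion coefficient, k_on the buffer binding rate, and [B]_∞ the free buffer concentration far from the channel. The new approximants generalize this type of short-range/long-range interpolation to cooperative two-site buffers.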

Acknowledgements: Supported in part by NSF DMS-1517085 (V.M).

References

  1. Naraghi M, Neher E. Linearized buffered Ca2+ diffusion in microdomains and its implications for calculation of [Ca2+] at the mouth of a calcium channel. Journal of Neuroscience. 1997; 17: 6961-6973.

  2. Smith GD, Dai L, Miura RM, Sherman A. Asymptotic Analysis of Buffered Calcium Diffusion Near a Point Source. SIAM Journal on Applied Mathematics. 2001; 61: 1816-1838.

  3. Matveev V. Extension of Rapid Buffering Approximation to Ca2+ Buffers with Two Binding Sites. Biophysical Journal. 2018; 114: 1204-1215.

  4. Chen Y, Muratov C, Matveev V. Efficient approximations for stationary single-channel Ca2+ nanodomains across length scales. bioRxiv. 2020.

P24 A bump-attractor spiking neural network for motor adaptation and washout based on norepinephrine release in primary motor cortex

Mariia Popova1, Melanie Tschiersch1, Nicolas Berberich2, Stefan K Ehrlich2, David Franklin3, Gordon Cheng2

1Technical University of Munich, Department of Electrical and Computer Engineering, Munich, Germany; 2Technical University of Munich, Institute for Cognitive Systems, Munich, Germany; 3Technical University of Munich, Neuromuscular Diagnostics, Munich, Germany

Correspondence: Mariia Popova (maria.popova@tum.de)

BMC Neuroscience 2020, 21(Suppl 1):P24

In order to examine the formation of predictive motor memories, typical behavioural motor learning experiments perturb participants' reaching movements using an external force field, to which they rapidly adapt, and which produces aftereffects when the force field is removed. During the force field adaptation trials (Fig. 1), subjects move their hand from the start position (red circle) to the target position (black x) while a force field perturbs the movement (strength and direction indicated by yellow arrows), causing the hand to move to the final position (green circle). After the force field is removed, a washout effect is observed. While previous computational models can recreate the behavioural results, they do not account for the neural mechanisms involved. A computational model that includes a synaptic mechanism can help to explain the processes involved in motor learning. For this reason, we developed a bump-attractor spiking neuron model of primary motor cortex (M1), proposing a synaptic mechanism based on reward-based neurotransmitter release to explain motor adaptation and washout.

Fig. 1

a Schematic of the behavioural experiment. b Single-trial simulation: bump of neuronal activity in the first perturbed trial. c Adaptation results: error development over consecutive trials during adaptation and washout in simulations (blue) compared to behavioural data (green) from [1]

The developed model consists of directionally tuned neurons, shown to exist in M1, that encode the hand position through average neural firing. The force field is modeled as a simulated external current perturbing the neural activity in the direction of the force field. In biology, norepinephrine is released from the locus coeruleus to M1 when errors are detected in the visual pathway. Norepinephrine affects M1 in a goal-directed manner, increasing the excitatory synaptic responses in a so-called hotspot, which is determined by arousal. To keep the model close to biology, adaptation is modeled through an error-dependent increase in excitatory-to-excitatory conductance at the target position within the M1 model, leading to a decrease of the perturbation of the stable bump of neural activity across trials. Washout is implemented through a shift of the hotspot and the accumulated norepinephrine via a motor-coordinate-system shift upon force field removal. After the initial washout trial, the erroneous coordinate-system shift is detected and the norepinephrine in the shifted hotspot decays.

The simulations of the proposed computational model qualitatively account for both adaptation and washout, as seen in comparison with the behavioural data from [1] (Fig. 1). Thus, the model suggests, for the first time, a biologically plausible synaptic mechanism in M1 that can explain the main features of motor learning of external dynamics.

Reference

  1. Nozaki D, Kurtzer I, Scott S. Limited transfer of learning between unimanual and bimanual skills within the same limb. Nature Neuroscience. 2006; 9: 1364-1366.

P25 Increased glutamate metabolism in ACC after brief mindfulness training: a pilot MR spectroscopy study

Yiyuan Tang1, Pegah Askari2, Changho Choi2

1Texas Tech University, Lubbock, Texas, United States of America; 2University of Texas Southwestern Medical Center, Advanced Imaging Research Center, Dallas, United States of America

Correspondence: Yiyuan Tang (yiyuan.tang@ttu.edu)

BMC Neuroscience 2020, 21(Suppl 1):P25

Mindfulness training (MT) involves paying attention to the present and increasing awareness of one's thoughts and emotions without judgment, and has become a promising intervention for promoting health and well-being. Neuroimaging studies have shown its beneficial effects on brain functional activity, connectivity, and structure [1-3]. A series of RCTs indicated that one form of MT, integrative body-mind training (IBMT), induces brain functional and structural changes in regions related to self-control networks, such as the anterior cingulate cortex (ACC), after 2-10 h of practice [1-3]. However, whether MT could change brain metabolism in the ACC remains unexplored. Utilizing non-invasive proton magnetic resonance spectroscopy (MRS), we conducted the first pilot study investigating whether brief IBMT could change the excitatory and inhibitory neurotransmitter responses within the ACC [1-3]. Nine healthy college students completed ten 1-hour IBMT sessions within 2 weeks, and brain metabolism was assessed before and after using a 3T Siemens Prisma scanner. Following survey imaging and T1-weighted structural imaging, single-voxel point-resolved spectroscopy (PRESS) was conducted to estimate metabolite concentrations in two regions, the rostral and dorsal ACC, chosen based on prior literature [1-4]. PRESS scan parameters included TR 2 s, TE 90 ms, sweep width 2.5 kHz, 1024 sampling points, and 256 signal averages. Water suppression and B0 shimming up to second order were performed with the vendor-supplied tools. A reference water signal was acquired for eddy current compensation, multi-channel combination, and metabolite quantification. Spectral fitting was performed with LCModel software [5], using in-house basis spectra of metabolites calculated incorporating the PRESS slice-selective RF and gradient pulses. The spectral fitting was performed between 0.5 and 4.0 ppm.
After correcting the LCModel estimates of metabolite signals for T2 relaxation effects using published T2 values [6], the millimolar concentrations of metabolites were calculated with reference to water at 42 M. Paired t-tests were performed to examine changes. Results indicated a significant increase in glutamate (t = 3.24, p = 0.012), as well as in Glx (glutamate + glutamine) (t = 2.44, p = 0.041), in the rostral ACC. These results indicate that MT may not only increase ACC activity but may also induce neurochemical changes in regions of self-control networks, suggesting a potential mechanism for MT's effects on disorders such as addiction and schizophrenia, which often involve dysfunction of the glutamatergic system (i.e., lower glutamate metabolism).

References

  1. Tang YY, Holzel BK, Posner MI. The neuroscience of mindfulness meditation. Nature Reviews Neuroscience. 2015; 16: 213-225.

  2. Tang YY. The Neuroscience of Mindfulness Meditation: How the Body and Mind Work Together to Change Our Behavior? Springer Nature. 2017.

  3. Tang YY, Tang R, Gross JJ. Promoting emotional well-being through an evidence-based mindfulness training program. Frontiers in Human Neuroscience. 2019; 13: 237.

  4. Yang S, et al. Lower glutamate levels in rostral anterior cingulate of chronic cocaine users - A (1)H-MRS study using TE-averaged PRESS at 3T. Psychiatry Research. 2009; 174: 171-176.

  5. Provencher SW. Estimation of metabolite concentrations from localized in vivo proton NMR spectra. Magnetic Resonance in Medicine. 1993; 30: 672-679.

  6. Ganji SK, et al. T2 measurement of J-coupled metabolites in the human brain at 3T. NMR in Biomedicine. 2012; 25(4): 523-529.

P26 Learning recurrent dynamic patterns in spiking neural networks

Christopher Kim, Carson Chow

National Institutes of Health, Laboratory of Biological Modeling, NIDDK, Maryland, United States of America

Correspondence: Christopher Kim (chrismkkim@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P26

Understanding the recurrent dynamics of cortical circuits engaged in complex tasks is one of the central questions in computational neuroscience. Most recent studies train the output of recurrent models to perform cognitive or motor tasks and investigate whether the recurrent dynamics emerging from task-driven learning can explain neuronal data. However, the range of recurrent dynamics that can be realized within a recurrent model after learning, particularly in a spiking neural network, is not well understood. In this study, we investigate a spiking network's capability to learn recurrent dynamics and characterize its learning capacity in terms of network size, intrinsic synaptic decay time and target decay time. We find that, by modifying recurrent synaptic weights, spiking networks can generate arbitrarily complex recurrent patterns if 1) the target patterns can be produced self-consistently, 2) the synaptic dynamics are fast enough to track the targets, and 3) the number of neurons in the network is large enough for noisy postsynaptic currents to approximate the targets. We examine the spiking network's learning capacity analytically and corroborate the predictions by training spiking networks to learn arbitrary patterns and in vivo cortical activity. Furthermore, we show that a trained network can operate in the balanced state if the total excitatory and inhibitory synaptic weights to each neuron are constrained to preserve the balanced network structure. Under such synaptic constraints, the trained network generates spikes at the desired rate with large trial-to-trial variability and exhibits the paradoxical features of an inhibition-stabilized network.

These results show that spiking neural networks with fast synapses and a large number of neurons can generate arbitrarily complex dynamics. When learning is not optimal, our findings suggest potential sources of learning errors. Moreover, networks can be trained in dynamic regimes relevant to cortical circuits.

P27 Lessons from Artificial Neural Network for studying coding principles of Biological Neural Network

Hyojin Bae1, Chang-eop Kim2, Gehoon Chung3

1Gachon university, Gyeonggi-do, South Korea; 2Gachon university, College of Korean Medicine, Gyeonggi-do, South Korea; 3Seoul National University, Oral Physiology, Seoul, South Korea

Correspondence: Hyojin Bae (qogywls1573@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P27

An individual neuron or neuronal population is conventionally said to be "selective" for a stimulus feature if it responds differentially to that feature. Likewise, it is considered to encode certain information if decoding algorithms successfully predict a given stimulus or behavior from the neuronal activity. However, an erroneous assumption about the feature space can mislead the researcher about a neural coding principle. In this study, by simulating several likely scenarios with artificial neural networks (ANNs) and showing corresponding cases in biological neural networks (BNNs), we point out potential biases evoked by unrecognized features, i.e., confounding variables.

We modeled an ANN classifier with the open-source neural network library Keras, running TensorFlow as the backend. The model is composed of five hidden layers with dense connections and rectified linear activations. We added a dropout layer and an l2-regularizer to each layer to penalize layer activity during optimization. The model was trained on the CIFAR-10 dataset and its test-set accuracy saturated at about 53% (chance level = 10%). For stochastic sampling of individual neurons' activity from each deterministic unit, we generated Gaussian distributions by modeling within-population variability according to each assumption.

Using this model, we showed four possible misinterpretation cases induced by a missing feature. (1) The researcher may choose a second-best feature that is similar to the ground-truth feature. (2) An irrelevant feature that is correlated with the ground-truth feature may be chosen. (3) Evaluating a decoder in an incomplete feature space can result in overestimation of the decoder's performance. (4) A misconception about a unit's receptive field can cause signal to be treated as noise.
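Case (2) can be reproduced in a few lines (an illustrative NumPy toy model, not the Keras classifier itself; all parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_units = 2000, 50

# Ground-truth feature b drives the population; recorded feature a is merely
# correlated with b (the unrecognized-confound scenario).
b = rng.standard_normal(n_trials)
a = 0.9 * b + np.sqrt(1 - 0.9**2) * rng.standard_normal(n_trials)

weights = rng.standard_normal(n_units)
responses = np.outer(b, weights) + 0.5 * rng.standard_normal((n_trials, n_units))

def decode(feature, resp):
    """Least-squares linear decoder; returns held-out correlation with feature."""
    half = len(feature) // 2
    w, *_ = np.linalg.lstsq(resp[:half], feature[:half], rcond=None)
    pred = resp[half:] @ w
    return np.corrcoef(pred, feature[half:])[0, 1]

acc_a = decode(a, responses)  # the population appears to "encode" a
acc_b = decode(b, responses)  # but b is the true driver and decodes better
```

A high decoding score for a would tempt the researcher to conclude that the population encodes a, even though the responses never depended on it directly.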

In conclusion, we suggest that the comparative study of ANN and BNN from the perspective of machine learning can be a great strategy for deciphering the neural coding principle.

P28 Inhibitory gain allows transitions between integrated and segregated states: a neuromodulatory analysis from whole-brain models

Carlos Coronel1, Patricio Orio2, Rodrigo Cofré3

1Universidad de Valparaíso, Centro Interdisciplinario de Neurociencia de Valparaíso, Valparaíso, Chile; 2Universidad de Valparaíso, Instituto de Neurociencia, Valparaiso, Chile; 3Universidad de Valparaíso, Centro de Investigación y Modelamiento de Fenómenos Aleatorios, Valparaíso, Chile

Correspondence: Carlos Coronel (carlos.coronel@postgrado.uv.cl)

BMC Neuroscience 2020, 21(Suppl 1):P28

In the brain, at the macroscale level, two organizational principles participate in information processing: segregation and integration. While segregation allows information to be processed in specialized brain regions, integration coordinates the activity of these regions to generate a behavioral response [1]. Recent studies suggest that the cholinergic system promotes segregated states and the noradrenergic system promotes integrated states, both measured using graph-theoretical tools applied to functional connectivity (FC) matrices [2,3]. We extended this neuromodulatory framework by including the noradrenergic system (filter gain) and the effects of the cholinergic system on the excitatory and inhibitory circuits separately (excitatory and inhibitory gain). The framework was tested using the Jansen-Rit neural mass model [4], built from real human structural connectivity matrices with heterogeneous transmission delays for long-range connections (Fig. 1). fMRI-BOLD signals were simulated using a generalized hemodynamic model, and features such as global phase synchronization, oscillatory frequency, and signal-to-noise ratio (SNR) were measured. In parallel, FC matrices were built from the simulated BOLD signals using pairwise Pearson correlation, and thresholded FC matrices were analyzed with graph-theoretical tools to compute segregation and integration. Our results suggest that functional integration is possible only with suppression of feedback excitation, mediated by the inhibitory gain, and follows a sigmoid or inverted-U-shaped function depending on the noise intensity level. Integration is also accompanied by an increase in the signal-to-noise ratio and in the regularity of EEG signals. These results suggest a mechanistic interpretation: we propose that cholinergic neuromodulation of excitatory connections increases SNR locally, while its effect on inhibitory interneurons suppresses local cortico-cortical transmission, increasing the responsiveness of pyramidal neurons to stimuli from distant regions. Finally, the noradrenergic system coordinates long-range neural activity, promoting integration. This framework constitutes a new set of tools and ideas to test how neural gain mechanisms mediate the balance between integration and segregation in the brain.
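A minimal numpy sketch of the FC analysis pipeline follows. The abstract does not specify which graph measures were used, so transitivity (a segregation proxy) and global efficiency (an integration proxy) are illustrative choices, and the binary threshold value is hypothetical:

```python
import numpy as np

def fc_graph_metrics(bold, threshold=0.3):
    """Pearson FC matrix from (regions x time) signals, thresholded into a
    binary graph; returns (transitivity, global_efficiency) as simple
    segregation / integration proxies."""
    fc = np.corrcoef(bold)
    np.fill_diagonal(fc, 0.0)
    adj = (fc > threshold).astype(int)

    # transitivity: 3 * triangles / connected triples (segregation proxy)
    a2 = adj @ adj
    triangles = np.trace(a2 @ adj) / 6.0
    triples = (a2.sum() - np.trace(a2)) / 2.0
    transitivity = 3.0 * triangles / triples if triples else 0.0

    # global efficiency: mean inverse shortest-path length (integration proxy)
    n = len(adj)
    dist = np.where(adj, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):                         # Floyd-Warshall
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    inv = 1.0 / dist[~np.eye(n, dtype=bool)]   # 1/inf -> 0 for unreachable
    efficiency = inv.sum() / (n * (n - 1))
    return transitivity, efficiency

# toy example: two independent 3-region modules driven by shared signals
rng = np.random.default_rng(1)
s = rng.normal(size=(2, 500))
bold = np.repeat(s, 3, axis=0) + 0.3 * rng.normal(size=(6, 500))
trans, eff = fc_graph_metrics(bold, threshold=0.5)
```

In the toy example, each module forms a fully connected triangle with no between-module edges, so segregation (transitivity) is maximal while integration (efficiency) is limited by the disconnection.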

Fig. 1

Whole-brain neural mass model. A The model consists of a population of pyramidal neurons and two populations of excitatory and inhibitory interneurons. B The N = 90 cortical columns are connected via real human structural connectivity matrices, with heterogeneous time delays. C The cholinergic system operates through the parameters α and β, and the noradrenergic system through the parameter r0

References

1. Rubinov M, Sporns O. Complex network measures of brain connectivity: uses and interpretations. Neuroimage. 2010; 52(3): 1059-1069.

2. Shine JM, Aburn MJ, Breakspear M, Poldrack RA. The modulation of neural gain facilitates a transition between functional segregation and integration in the brain. Elife. 2018; 7: e31130.

3. Shine JM. Neuromodulatory influences on integration and segregation in the brain. Trends in Cognitive Sciences. 2019.

4. Jansen BH, Rit VG. Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biological Cybernetics. 1995; 73(4): 357-366.

P29 High-order interdependencies and the functional balance of segregation-integration in the aging brain

Marilyn Gatica1, Rodrigo Cofre2, Fernando Rosas3, Pedro Mediano4, Patricio Orio5, Ibai Diez6, Stephan Swinnen7, Jesus Cortes8

1Universidad de Valparaíso and Biomedical Research Doctorate Program UPV Spain, CINV, Faculty of Sciences, Valparaíso, Chile; 2Universidad de Valparaiso, Institute of Mathematical Engineering, Valparaiso, Chile; 3Imperial College London, Department of Medicine, London, United Kingdom; 4Cambridge University, Department of Psychology, Cambridge, United Kingdom; 5Universidad de Valparaíso, Instituto de Neurociencia, Valparaiso, Chile; 6Gordon Center for Medical Imaging, Harvard Medical School, Department of Radiology, Boston, United States of America; 7KU Leuven, Research Center for Movement Control and Neuroplasticity, Department of Movement Sciences, Leuven, Belgium; 8Computational Neuroimaging Lab, IKERBASQUE: The Basque Foundation for Science, Biocruces Health Research Institute, Bilbao, Spain

Correspondence: Rodrigo Cofre (rodrigocofre@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P29

The interdependencies in the brain can be studied either from a structural/anatomical perspective ("structural connectivity", SC) or by considering statistical interdependencies ("functional connectivity", FC). While SC is essentially pairwise (white-matter fibers start in one region and arrive at another), FC is not: there is no reason to restrict statistical interdependencies to pairs. A promising tool for studying high-order interdependencies is the recently proposed O-information [1]. This quantity captures the balance between redundancy and synergy in arbitrary sets of variables, extending the properties of the interaction information of three variables to larger sets. Redundancy is understood here as an extension of the conventional notion of correlation to more than two variables. In contrast, synergy corresponds to emergent statistical relationships that are carried by the whole but not by the parts.
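For jointly Gaussian variables, the O-information of Rosas et al. [1] has a closed form in terms of covariance determinants, via Ω(X) = (n−2)·H(X) + Σ_i [H(X_i) − H(X_−i)]. The sketch below is an illustration under this Gaussian assumption, not the authors' code:

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (nats) of a multivariate Gaussian."""
    cov = np.atleast_2d(cov)
    k = cov.shape[0]
    return 0.5 * (k * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

def o_information(cov):
    """O-information of a Gaussian system with covariance `cov`:
    Omega = (n-2) H(X) + sum_i [H(X_i) - H(X_-i)].
    Omega > 0: redundancy-dominated; Omega < 0: synergy-dominated."""
    cov = np.asarray(cov, float)
    n = cov.shape[0]
    omega = (n - 2) * gaussian_entropy(cov)
    for i in range(n):
        rest = np.delete(np.delete(cov, i, axis=0), i, axis=1)
        omega += gaussian_entropy(cov[i:i + 1, i:i + 1]) - gaussian_entropy(rest)
    return omega

# independent variables carry zero O-information, while uniformly
# correlated ("redundant") variables yield Omega > 0
omega_indep = o_information(np.eye(4))
omega_redund = o_information(0.1 * np.eye(4) + 0.9 * np.ones((4, 4)))
```

The two example covariance matrices confirm the sign convention: independence gives Ω = 0, and strong shared correlation gives a clearly positive (redundancy-dominated) Ω.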

In this study, we follow the seminal ideas introduced by Tononi, Sporns, and Edelman [2], which state that high brain functions may depend on the co-existence of integration and segregation. While the latter enables brain areas to perform specialized tasks independently of each other, the former binds brain areas together into an integrated whole for goal-directed task performance. A key insight put forward in [2] is that segregation and integration can coexist, and that this coexistence is measurable by assessing the high-order interactions of neural elements. We used the O-information to investigate how high-order statistical interdependencies are affected by aging. For this, we analyzed resting-state fMRI data from 164 healthy participants, ranging from 10 to 80 years old. Our results show an important increase in redundant interdependencies in the older population (ages 60 to 80 years). Moreover, this effect appears to be pervasive, taking place at all interaction orders, and suggests a change in the balance of differentiation and integration towards more synchronized arrangements. Additionally, we observed a redundant core of brain modules whose size decreased with age. The framework presented here, and in detail in [3], provides novel insights into the aging brain, revealing the role of redundancy in the prefrontal and motor cortices of older participants, which affects basic functions such as working memory and executive and motor function. This methodology may help provide a better understanding of some brain disorders from an informational perspective, providing "info-markers" that may lead to fundamental insights into the human brain in health and disease. The code to compute the metrics is available at [4].

Fig. 1

Method overview and main result. A 164 subjects, 2514 fMRI signals parcellated into 20 regions. B For each subject and interaction order, we computed the O-information of each n-plet. C Redundancy and synergy averaged over all 20 modules for each of the age groups I1 (younger), I2, I3, I4 (oldest). The right panel shows group differences in redundancy (diamonds), assessed using FDR

References

1. Rosas F, Mediano PAM, Gastpar M, Jensen HJ. Quantifying high-order interdependencies via multivariate extensions of the mutual information. Physical Review E. 2019; 100: 032305.

2. Tononi G, Sporns O, Edelman GM. A measure for brain complexity: relating functional segregation and integration in the nervous system. Proceedings of the National Academy of Sciences. 1994; 91(11): 5033-5037.

3. Gatica M, et al. High-order interdependencies in the aging brain. bioRxiv. 2020.

4. https://github.com/brincolab/High-Order-interactions.

P30 Biophysically realistic model of mouse olfactory bulb gamma fingerprint

Justas Birgiolas1, Richard Gerkin2, Sharon Crook3

1Arizona State University, Arizona, United States of America; 2Arizona State University, School of Life Sciences, Tempe, Arizona, United States of America; 3Arizona State University, School of Mathematical and Statistical Sciences, Tempe, Arizona, United States of America

Correspondence: Justas Birgiolas (jbirgio@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P30

The mammalian olfactory bulb is an intensively investigated system that is important in understanding neurodegenerative diseases. Insights gained from understanding the system also have important agricultural and national security applications. In this work, we developed a large-scale, biophysically and geometrically realistic model of the mouse olfactory bulb and the gamma frequency oscillations [1] it exhibits. Model code, documentation, and tutorials are available at olfactorybulb.org.

The model consists of realistic mitral, tufted (excitatory), and granule (inhibitory) cell models whose electrophysiology and reconstructed morphology have been validated against experimental data using a suite of NeuronUnit [2] validation tests. The cell models were realistically placed, oriented, and confined within anatomically correct mouse olfactory bulb layers obtained from the Allen Brain Atlas [3] using features of the BlenderNEURON software [4]. Dendritic proximity was used to form chemical and electrical synapses between principal and inhibitory cell dendrites. Glomeruli were stimulated using simulated odors obtained from optical imaging experiments. The local field potentials generated by the network were monitored and processed using wavelet analysis to replicate a gamma frequency pattern (fingerprint) consisting of early high-frequency and later low-frequency temporal components. Simulations were performed using parallel NEURON [5].
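As a toy illustration of how a two-component gamma fingerprint can be read out of an LFP-like trace by wavelet analysis, the sketch below applies a generic Morlet transform to a synthetic signal. It is not the model's actual LFP pipeline, and the frequencies, sampling rate, and wavelet width are all invented for illustration:

```python
import numpy as np

def morlet_power(signal, fs, freqs, n_cycles=6):
    """Wavelet power (freqs x time) via complex Morlet convolution."""
    power = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sd = n_cycles / (2 * np.pi * f)             # temporal s.d. of wavelet
        t = np.arange(-4 * sd, 4 * sd, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sd**2))
        wavelet /= np.abs(wavelet).sum()            # unit-gain normalization
        power[i] = np.abs(np.convolve(signal, wavelet, mode="same"))**2
    return power

# synthetic "LFP": early high-gamma (80 Hz), then later low-gamma (40 Hz)
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
lfp = np.where(t < 0.5, np.sin(2 * np.pi * 80 * t), np.sin(2 * np.pi * 40 * t))
freqs = np.arange(20, 101, 5)
power = morlet_power(lfp, fs, freqs)
early = freqs[np.argmax(power[:, 250])]   # dominant frequency at t = 0.25 s
late = freqs[np.argmax(power[:, 750])]    # dominant frequency at t = 0.75 s
```

The spectrogram correctly resolves the early high-frequency and later low-frequency components, which is the structure the fingerprint analysis exploits.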

The network was subjected to computational manipulations, which revealed the critical importance of gap junctions, granule cell inhibition, and input-strength differences between mitral and tufted cells in generating the gamma fingerprint. Specifically, at the glomerular level, gap junctions synchronize the firing of the mitral and tufted cell populations. Synchronized tufted cells activate granule cells, which inhibit mitral cells. Meanwhile, reduced afferent excitatory input delays mitral cell activation, a delay that is amplified by tufted-cell-activated granule cell inhibition. The interaction between these three mechanisms results in the two clusters of activity seen in the gamma fingerprint (Fig. 1).

Fig. 1

The network model consists of biophysically realistic mitral, tufted, and granule cell models, which are positioned within reconstructed mouse olfactory bulb layers. The network is activated by simulated glomerular inputs. Local field potentials generated by the network are measured using an extracellular electrode. The network produces a two-cluster gamma fingerprint (lower right)

The results of the computational experiments support mechanistic hypotheses proposed in earlier experimental work and provide novel insights into the mechanisms responsible for olfactory bulb gamma fingerprint generation, which can be directly tested using common experimental preparations.

Acknowledgments: This work was funded in part by the National Institutes of Health through 1F31DC016811 to JB, R01MH1006674 to SMC, and R01EB021711 to RCG.

References

1. Manabe H, Mori K. Sniff rhythm-paced fast and slow gamma-oscillations in the olfactory bulb: relation to tufted and mitral cells and behavioral states. Journal of Neurophysiology. 2013; 110(7): 1593-1599.

2. Gerkin RC, Birgiolas J, Jarvis RJ, Omar C, Crook SM. NeuronUnit: a package for data-driven validation of neuron models using SciUnit. bioRxiv. 2019; 665331.

3. Oh S, et al. A mesoscale connectome of the mouse brain. Nature. 2014; 508(7495): 207-214.

4. Birgiolas J. Towards Brains in the Cloud: A Biophysically Realistic Computational Model of Olfactory Bulb [Doctoral dissertation, Arizona State University].

5. Hines M, Carnevale N. The NEURON simulation environment. Neural Computation. 1997; 9(6): 1179-1209.

P31 Exploring fast and slow neural correlates of auditory perceptual bistability with diffusion-mapped delay coordinates

Pake Melland, Rodica Curtu

University of Iowa, Department of Mathematics, Iowa City, Iowa, United States of America

Correspondence: Rodica Curtu (rodica-curtu@uiowa.edu)

BMC Neuroscience 2020, 21(Suppl 1):P31

Perceptual bistability is a phenomenon in which an observer can perceive an identical stimulus in two or more ways. The auditory streaming task has been shown to produce spontaneous switching between two perceptual states [1]. In this task, a listener is presented with a stream of tone triplets with the pattern ABA–, where A and B are tones of different frequencies and '–' is a brief period of silence. The listener alternates between two perceptual states: 1-stream, in which the stimulus is integrated into a single stream, and 2-stream, in which the stimulus is perceived as two segregated streams. To study the localization and dynamic properties of neural correlates of auditory streaming, we collected electrocorticography (ECoG) data from neurosurgical patients while they listened to sequences of repeated triplets and self-reported switches between the two perceptual states.

ECoG recordings are noisy and inherently high dimensional, so meaningful analysis requires dimensionality reduction. Diffusion Maps is a non-linear dimensionality reduction technique that embeds high-dimensional data into a low-dimensional Euclidean space [2]. The method builds a Markov matrix from a similarity measure on the original data. Under reasonable assumptions, the eigenvalues of the Markov matrix are positive and bounded above by 1; the largest eigenvalues, together with their eigenvectors, provide coordinates for an embedding of the data into d-dimensional Euclidean space. In [3], Diffusion Maps were used for a group-level analysis of neural signatures during auditory streaming based on subject-reported perception. We extend this approach by taking into account the time-ordered nature of the ECoG signals. For data with a natural time ordering, it is beneficial to structure the data to emphasize its temporal dynamics; in [4], the authors develop the Diffusion-Mapped Delay Coordinates (DMDC) algorithm. Time-delayed data are first constructed from the time series; this initial step projects the data onto its most stable subsystem. Because the stable subsystem may still be high dimensional, Diffusion Maps are then applied to the time-delayed data, projecting the stable subsystem onto a low-dimensional representation adapted to the dynamics of the system.
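The two DMDC stages, delay embedding followed by a diffusion-map eigendecomposition, can be sketched as follows. The Gaussian kernel with a median-distance bandwidth is a common heuristic and not necessarily the authors' choice:

```python
import numpy as np

def delay_embed(x, dim, lag=1):
    """Stack time-delayed copies of a 1-D series into (samples x dim)."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

def diffusion_map(data, eps=None, n_coords=2):
    """Diffusion-map embedding: Gaussian kernel -> row-normalized Markov
    matrix -> leading nontrivial eigenvectors as coordinates."""
    d2 = ((data[:, None, :] - data[None, :, :]) ** 2).sum(-1)
    if eps is None:
        eps = np.median(d2)                  # heuristic kernel bandwidth
    kernel = np.exp(-d2 / eps)
    markov = kernel / kernel.sum(axis=1, keepdims=True)
    evals, evecs = np.linalg.eig(markov)
    order = np.argsort(-evals.real)
    evals, evecs = evals.real[order], evecs.real[:, order]
    # eigenvalue 1 with a constant eigenvector is trivial; skip it
    return evals, evecs[:, 1:1 + n_coords] * evals[1:1 + n_coords]

# noisy sine wave: delay coordinates recover the underlying slow cycle
t = np.linspace(0, 8 * np.pi, 400)
x = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
emb = delay_embed(x, dim=10, lag=2)
evals, coords = diffusion_map(emb)
```

Consistent with the text, the Markov matrix's spectrum is bounded above by 1, and the gaps between leading eigenvalues are what separate the time scales of the dynamics.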

We apply the DMDC algorithm to ECoG recordings from Heschl's gyrus in order to explore and reconstruct the underlying dynamics present during the auditory streaming task. We find that the eigenvalues obtained through the DMDC algorithm provide a way to uncover the multiple time scales present in the underlying system. The corresponding eigenvectors form a Fourier-like basis adapted both to the fast components of the ECoG signal encoding the physical properties of the stimulus and to a slow mechanism that corresponds to the perceptual switching reported by subjects.

Acknowledgments: National Science Foundation, CRCNS grant 1515678, and The Human Brain Research Lab, University of Iowa, Iowa (Matthew A. Howard & Kirill Nourski).

References

1. Noorden LP. Temporal coherence in the perception of tone sequences [Doctoral dissertation, Eindhoven].

2. Coifman RR, Lafon S. Diffusion maps. Applied and Computational Harmonic Analysis. 2006; 21(1): 5-30.

3. Curtu R, Wang X, Brunton BW, Nourski KV. Neural signatures of auditory perceptual bistability revealed by large-scale human intracranial recordings. Journal of Neuroscience. 2019; 39(33): 6482-97.

4. Berry T, Cressman JR, Greguric-Ferencek Z, Sauer T. Time-scale separation from diffusion-mapped delay coordinates. SIAM Journal on Applied Dynamical Systems. 2013; 12(2): 618-49.

P32 Contribution of the Na/K pump to rhythmic bursting

Ronald Calabrese1, Ricardo J E Toscano2, Parker J Ellingson2, Gennady Cymbalyuk2

1Emory University, Department of Biology, Atlanta, Georgia, United States of America; 2Georgia State University, Neuroscience Institute, Atlanta, Georgia, United States of America

Correspondence: Ronald Calabrese (biolrc@emory.edu)

BMC Neuroscience 2020, 21(Suppl 1):P32

The Na/K pump, often thought of as serving a background function in neuronal activity, contributes an outward current (IPump) that responds to the internal Na+ concentration ([Na+]i). In bursting neurons, such as those found in central pattern generators (CPGs) that produce rhythmic movements, both [Na+]i and thus IPump can be expected to vary throughout the burst cycle [1-3]. This variation with electrical activity, together with independence from membrane potential, endows IPump with dynamical properties not available to channel-based currents (e.g., voltage-gated, transmitter-gated, or leak channels). Moreover, in many neurons the pump's activity is regulated by a variety of neuromodulators, further expanding the potential role of IPump in rhythmic bursting activity [4]. Using a combination of experiments, modeling, and hybrid-systems analyses, we have sought to determine how IPump and its modulation influence rhythmic activity in a CPG.
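A toy illustration of the feedback loop between activity-driven Na+ influx and pump extrusion follows. The sigmoidal pump activation and every parameter value are invented for illustration; this is not a model of any specific CPG neuron:

```python
import numpy as np

def i_pump(na_i, i_max=1.0, na_half=20.0, slope=5.0):
    """Sigmoidal, voltage-independent Na/K pump current that grows with
    intracellular [Na+] (mM). Parameter values are illustrative only."""
    return i_max / (1.0 + np.exp((na_half - na_i) / slope))

def simulate_na(influx=0.3, na0=10.0, t_max=200.0, dt=0.05):
    """Toy [Na+]i dynamics: constant activity-driven influx balanced by
    pump extrusion proportional to IPump (forward Euler)."""
    steps = int(t_max / dt)
    na = np.empty(steps)
    na[0] = na0
    for k in range(1, steps):
        na[k] = na[k - 1] + dt * (influx - i_pump(na[k - 1]))
    return na

na = simulate_na()
# the pump current settles at the value that exactly balances the influx
pump_at_end = i_pump(na[-1])
```

Because IPump tracks [Na+]i rather than membrane voltage, it relaxes toward the influx level on its own slow timescale, which is the property that lets it shape burst dynamics independently of channel-based currents.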

References

1. Rybak IA, Molkov YI, Jasinski PE, Shevtsova NA, Smith JC. Rhythmic bursting in the pre-Bötzinger complex: mechanisms and models. Progress in Brain Research. 2014; 209: 1-23.

2. Kueh D, Barnett WH, Cymbalyuk GS, Calabrese RL. Na(+)/K(+) pump interacts with the h-current to control bursting activity in central pattern generator neurons of leeches. Elife. 2016; 5: e19322.

3. Picton LD, Nascimento F, Broadhead MJ, Sillar KT, Miles GB. Sodium pumps mediate activity-dependent changes in mammalian motor networks. Journal of Neuroscience. 2017; 37(4): 906-921.

4. Tobin AE, Calabrese RL. Myomodulin increases Ih and inhibits the Na/K pump to modulate bursting in leech heart interneurons. Journal of Neurophysiology. 2005; 94(6): 3938-50.

P33 Temperature coding mechanisms in the cold-sensitive Drosophila neurons

Natalia Maksymchuk1, Akira Sakurai1, Jamin M Letcher1, Daniel N Cox1, Gennady Cymbalyuk1

1Georgia State University, Neuroscience Institute, Atlanta, Georgia, United States of America

Correspondence: Gennady Cymbalyuk (gcymbalyuk@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P33

Sensation of noxious cold stimuli plays an essential role in the survival of organisms, and adequate perception of the characteristics of cold stimuli is necessary for appropriate protective behavioral responses. Primary cold-sensing neurons encode distinct information about a cold stimulus: the rate of temperature decrease and the absolute temperature. Here, we focus on the roles of different ion channels in cold temperature coding in the Drosophila larva, investigating the dynamics of the Class III (CIII) somatosensory neurons, which trigger a stereotypic cold-evoked behavior, a full-body contraction (CT). We combined computational, genetic, and electrophysiological methods to develop a biophysical model of CIII neurons. The model includes ionic currents implicated by transcriptomic data on ion channel expression in CIII neurons; these currents are implemented using gating characteristics of Drosophila Na+ and K+ channels obtained from patch-clamp data in the experimental literature [1,2]. We consider three subsystems: (1) a fast spike-generating subsystem, (2) a moderately slow pattern-generating subsystem, and (3) a slow thermotransduction subsystem, and we investigated the role of each in temperature coding. Using the slow-fast decomposition approach, we isolated the fast spike-generating subsystem (a reduced CIII model), systematically varied two parameters, the temperature (T) and Gleak, classified the observed activity regimes, and mapped them onto the (T, Gleak) plane. The model exhibits a wide spectrum of regimes: spiking, bursting, and silence. Analysis of a model including the moderately slow pattern-generating subsystem unveiled a slow bursting activity pattern of the CIII model; this regime requires Ca2+-activated K+ currents at noxious cold temperatures and an increased level of intracellular Ca2+. A similar burst-and-pause activity pattern has been implicated in encoding noxious heat stimuli by the heat-sensitive Drosophila CIV neurons. We then investigated the full model, including the TRP channels that constitute the slow thermotransduction subsystem, and show that they play key roles in coding absolute noxious cold temperature and the rate of temperature decrease. Our model qualitatively reproduces the temporal properties of recordings from Trpm, Pkd2, and TRPA1 knock-down animals: in agreement with these experimental results, the corresponding models of TRP-mutant CIII neurons show (1) a decreased average cold-evoked firing rate and (2) a diminished peak firing rate in response to fast temperature changes, relative to controls. Studying the roles of these ion channel subsystems helped us classify possible regimes of activity and the transitions between them, and to understand the complex dynamics of the cold-sensing neuron and putative mechanisms of sensory encoding.
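A toy analogue of the regime-mapping procedure follows, using a generic FitzHugh-Nagumo model in place of the CIII model (whose equations are not given in the abstract) and classifying each point of a 2-D parameter plane by the amplitude of late voltage excursions:

```python
import numpy as np

def simulate_fhn(i_ext, eps=0.08, a=0.7, b=0.8, t_max=200.0, dt=0.05):
    """FitzHugh-Nagumo neuron integrated with forward Euler; returns v(t)."""
    steps = int(t_max / dt)
    v, w = np.empty(steps), np.empty(steps)
    v[0], w[0] = -1.2, -0.6
    for k in range(1, steps):
        v[k] = v[k - 1] + dt * (v[k - 1] - v[k - 1]**3 / 3 - w[k - 1] + i_ext)
        w[k] = w[k - 1] + dt * eps * (v[k - 1] + a - b * w[k - 1])
    return v

def classify(v, spike_amp=1.0):
    """'spiking' if the late voltage trace still shows large excursions."""
    tail = v[len(v) // 2:]
    return "spiking" if tail.max() - tail.min() > spike_amp else "silent"

# sweep a 2-D parameter plane (here: drive x recovery gain) and map regimes,
# analogous to mapping activity regimes on the (T, Gleak) plane
regimes = {(i_ext, eps): classify(simulate_fhn(i_ext, eps=eps))
           for i_ext in (0.0, 0.5) for eps in (0.08,)}
```

Replacing the drive and recovery-gain axes with temperature and leak conductance, and the two-way classifier with one that also detects bursting, gives the same kind of regime map described above.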

Acknowledgments: This research was supported by NIH grants 1R01NS115209 (DNC and GC) and 2R01NS086082 (DNC) and GSU B&B Grant (DNC).

References

1. Wang L, Nomura Y, Du Y, Dong K. Differential effects of TipE and a TipE-homologous protein on modulation of gating properties of sodium channels from Drosophila melanogaster. PLoS One. 2013; 8(7): e67551.

2. Hardie RC. Voltage-sensitive potassium channels in Drosophila photoreceptors. Journal of Neuroscience. 1991; 11(10): 3079-3095.

P34 Atlas-mapped reconstruction of the cerebellar Lingula

Alice Geminiani1, Dimitri Rodarie2, Robin De Schepper3, Claudia Casellato4, Egidio D’Angelo3

1University of Pavia, Pavia, Italy; 2École polytechnique fédérale de Lausanne, Blue Brain Project, Geneva, Switzerland; 3University of Pavia, Department of Brain and Behavioral Sciences, Pavia, Italy; 4University of Pavia, Department of Brain and Behavioral Sciences, Unit of Neurophysiology, Pavia, Italy

Correspondence: Alice Geminiani (alice.geminiani@unipv.it)

BMC Neuroscience 2020, 21(Suppl 1):P34

Large-scale neuronal networks are a powerful tool to investigate brain area functionality, embedding realistic temporal dynamics of neurons, synapses and microcircuits. However, spatial features that differentiate regions within the same brain area are crucial for the proper functioning of interconnected networks. In the case of the cerebellum, input signals are mapped and integrated in different regions with specific structural properties and local network dynamics [1].

Here we describe the reconstruction of the mouse Lingula (a region of the cerebellar vermis), mapped on data from the Allen Brain Atlas as in [2], including neuron densities and orientation vectors in cubic voxels (25 µm side).

Network reconstruction was based on the strategies of the cerebellar scaffold [3], with new features for folded volumes. The main cerebellar neurons were placed in the three layers of the cerebellar cortex: Granule and Golgi cells (GrC and GoC) in the granular layer, Purkinje cells (PC) in the Purkinje layer, and Basket and Stellate cells in the molecular layer. A particle-placement strategy was used for granular- and molecular-layer neurons, in subvolumes of 50 voxels each. The algorithm included: placing the total number of neurons expected in each subvolume at random positions as repellent particles; iteratively re-positioning detected colliding particles within the subvolume; and pruning particles placed outside it. An ad hoc algorithm was developed for PCs: in each parasagittal section, they were placed in parallel arrays at a minimum distance of 130 µm to avoid dendritic tree overlap, using the A* search algorithm to find adjacent PCs.
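The particle-placement steps can be sketched as follows. This is a simplified toy that ignores the tetrahedral mesh and uses a cubic subvolume; the box size, minimum distance, and iteration budget are invented for illustration:

```python
import numpy as np

def place_repellent(n, box=100.0, d_min=8.0, n_iter=200, rng=0):
    """Place n 'particles' (somata) at random in a cubic subvolume, then
    iteratively push apart colliding pairs and clamp to the volume bounds."""
    rng = np.random.default_rng(rng)
    pos = rng.uniform(0, box, size=(n, 3))
    for _ in range(n_iter):
        diff = pos[:, None, :] - pos[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)
        i, j = np.where(dist < d_min)        # indices of colliding pairs
        if i.size == 0:
            break                            # no collisions left
        # repel each colliding pair along its separation vector
        push = diff[i, j] / dist[i, j, None] * (d_min - dist[i, j, None]) / 2
        np.add.at(pos, i, push)
        pos = np.clip(pos, 0, box)           # confine to the subvolume
    return pos

pos = place_repellent(n=30, box=100.0, d_min=8.0)
```

At realistic densities the same relaxation loop runs per subvolume, with the final pruning done against the actual layer geometry rather than a box.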

Orientation data were used for connectivity: each cell morphology was rotated according to the orientation of the voxel containing the soma, and connections were identified with an efficient algorithm that searches for intersections of the discretized, rotated morphology volumes. For connections from GrCs, parallel fibers were also bent to follow the orientation field. The resulting network included about 10^5 neurons and 10^7 connections (Fig. 1).

Fig. 1

Mapped reconstruction of the Lingula with the main cerebellar neurons. Rotated morphologies are reported for some PCs. The inset panel shows in more detail the parallel arrays of PCs, with rotated sample morphologies

The proposed pipeline will be generalized to other cerebellar regions, up to the reconstruction of a full mouse cerebellum. The network, filled with point or detailed neuron/synapse models, will be used to investigate spatial features of signal propagation in the cerebellum, and the specialization and integration of sensory signals across different regions [4]. This will allow us to reproduce spatially mapped experimental data, e.g., local field potentials, and to embed the cerebellum in whole-brain frameworks built on atlases, in which multiple interconnected brain areas can be simulated even using different descriptions for each region (e.g., hybrid models with spiking circuits and mean-field components).

The workflow, here developed for the cerebellum, could be complementary to other algorithms, and applied to other brain areas.

Acknowledgements: This work has received funding from European Union’s Horizon 2020 Framework Programme for Research and Innovation under Grant Agreement N. 785907 (Human Brain Project SGA2) and SGA3.

References

1. Casali S, Marenzi E, Medini C, Casellato C, D'Angelo E. Reconstruction and simulation of a scaffold model of the cerebellar network. Frontiers in Neuroinformatics. 2019; 13: 37.

2. Erö C, Gewaltig MO, Keller D, Markram H. A cell atlas for the mouse brain. Frontiers in Neuroinformatics. 2018; 12: 84.

3. De Schepper, et al. A scaffold model for the cerebellum. [in prep -https://github.com/dbbs-lab/scaffold]

4. Apps R, et al. Cerebellar modules and their role as operational cerebellar processing units. The Cerebellum. 2018; 17(5): 654-82.

P35 Through synapses to spatial memory maps: a topological model

Yuri Dabaghian

The University of Texas McGovern Medical School, Neurology, Houston, Texas, United States of America

Correspondence: Yuri Dabaghian (yuri.a.dabaghian@uth.tmc.edu)

BMC Neuroscience 2020, 21(Suppl 1):P35

Learning and memory are fundamentally collective phenomena, brought into existence by highly organized spiking activity of large ensembles of cells. Yet, linking the characteristics of the individual neurons and synapses to the properties of large-scale cognitive representations remains a challenge: we lack conceptual approaches for connecting the neuronal inputs and outputs to the integrated results at the ensemble level. For example, numerous experiments point out that weakening of the synapses correlates with weakening of memory and learning abilities—but how exactly does it happen? If, for example, the synaptic strengths decrease on average by 5%, then will the time required to learn a particular navigation task increase by 1%, by 5% or by 50%? How would the changes in learning capacity depend on the original cognitive state? Can an increase in learning time, caused by a synaptic depletion, be compensated by increasing the population of active neurons or by elevating their spiking rates? Answering these questions requires a theoretical framework that connects the individual cell outputs and the large-scale cognitive phenomena that emerge at the ensemble level.

We propose a modeling approach that bridges this "semantic gap" between electrophysiological parameters of neuronal activity and the characteristics of spatial learning, using techniques from algebraic topology. Specifically, we study the influence of synaptic transmission probability and the effects of synaptic plasticity on the hippocampal network's ability to produce a topological cognitive map of the ambient space. We simulate the deterioration of spatial learning capacity as a function of synaptic depletion in the hippocampal network, to gain insight into spatial learning deficits (as observed, e.g., in Alzheimer's disease) and to understand why the development of these deficits may correlate with changes in the number of spiking neurons and/or their firing rates, variations in the "brain wave" frequency spectra, etc. The results shed light on the principles of spatial learning in plastic networks and may advance our understanding of neurodegenerative conditions.
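As a toy example of the topological viewpoint, the sketch below builds a coactivity graph from hypothetical place fields covering a circular track and recovers the track's topology from graph-level Betti numbers. The full approach uses simplicial complexes and persistent homology; everything here (field layout, overlap threshold) is invented for illustration:

```python
import numpy as np

def betti_numbers(n, edges):
    """Betti-0 (components) and Betti-1 (independent loops) of a graph,
    via union-find and the cycle-rank formula b1 = E - V + b0."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    b0 = len({find(i) for i in range(n)})
    b1 = len(edges) - n + b0
    return b0, b1

# toy cognitive map: place fields evenly covering a circular track;
# cells whose fields overlap are "coactive" and get an edge
n = 12
angles = 2 * np.pi * np.arange(n) / n
gap = lambda i, j: np.abs((angles[i] - angles[j] + np.pi) % (2 * np.pi) - np.pi)
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if gap(i, j) < 0.6]
b0, b1 = betti_numbers(n, edges)
```

Here the coactivity graph is a single 12-cycle, so b0 = 1 and b1 = 1, matching the circle's topology: the ensemble-level map recovers the shape of the environment even though no single cell "sees" it.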

P36 Robust spatial memories encoded by transient neuronal networks

Yuri Dabaghian

The University of Texas McGovern Medical School at Houston, Neurology, Houston, Texas, United States of America

Correspondence: Yuri Dabaghian (yuri.a.dabaghian@uth.tmc.edu)

BMC Neuroscience 2020, 21(Suppl 1):P36

The principal cells in mammalian hippocampus encode an internalized representation of the environment - the hippocampal cognitive map, that underlies spatial memory and spatial awareness. However, the synaptic architecture of the hippocampal network is dynamic: it contains a transient population of “cell assemblies”- functional units of the hippocampal computations - that emerge among the groups of coactive neurons and may disband due to reduction or cessation of spiking activity, then reappear, then disband again, etc. Electrophysiological studies in rats and mice suggest that the characteristic lifetimes of typical hippocampal cell assemblies range between minutes to tens of milliseconds. In contrast, cognitive representations sustained by the hippocampal network can last in rodents for months, which raises a principal question: how can a stable large-scale representation of space emerge from a rapidly rewiring neuronal stratum? We propose a computational approach to answering this question based on Algebraic Topology techniques and ideas. By simulating the place cell spiking activity during the rat’s exploratory movements through different environments and testing the stability of the resulting large-scale neuronal maps, we find that the networks with “flickering” architectures can reliably capture the topology of the ambient spaces. Moreover, the model suggests that the information is processed at three principal timescales, which roughly correspond to the short term, intermediate term and the long-term memories. The rapid rewiring of the local network connections occurs at the fastest timescale. The timescale at which the large-scale structures defining the shape of the cognitive map may fluctuate is by about an order of magnitude slower than the timescale of the information processing at the synaptic level. 
Lastly, an emerging stable topological base provides lasting, qualitative information about the environment, which remains robust despite the ongoing transience of the local connections.
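
As a toy illustration of the core idea (the actual analysis uses persistent homology over simplicial complexes of coactive cells, which is beyond this sketch), one can track a simple topological invariant, the number of connected components (Betti-0), of a coactivity graph whose connections flicker:

```python
def betti0(n_nodes, edges):
    """Number of connected components (zeroth Betti number) of a graph,
    computed with a union-find over the currently active edges."""
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return len({find(x) for x in range(n_nodes)})

# Two snapshots of a "flickering" graph can share the same large-scale
# invariant even though no single edge is common to both:
# betti0(4, [(0, 1), (1, 2), (2, 3)]) == betti0(4, [(0, 3), (3, 1), (1, 2)]) == 1
```

In the abstract's setting the stable quantity is the full set of topological invariants of the simulated cognitive map, not just connectivity, but the principle is the same: an invariant can persist while individual connections appear and disappear.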

P37 A realistic spatial model of the complete synaptic vesicle cycle

Andrew Gallimore, Iain Hepburn, Erik De Schutter

Okinawa Institute of Science and Technology, Computational Neuroscience Unit, Onna-son, Japan

Correspondence: Andrew Gallimore (gallimore@cantab.net)

BMC Neuroscience 2020, 21(Suppl 1):P37

The release of neurotransmitters from synaptic vesicles is the fundamental mechanism of information transmission between neurons in the brain. The entire synaptic vesicle cycle involves a highly complex interplay of proteins that direct vesicle docking at the active zone, the detection of intracellular calcium levels, fusion with the presynaptic membrane, and the subsequent retrieval of the vesicle protein material for recycling [1]. Despite its central importance in many aspects of neuronal function, and even though computational models of subcellular neuronal processes are becoming increasingly important in neuroscience research, realistic models of the synaptic vesicular cycle are almost non-existent. This is largely because the modeling tools for detailed spatial modeling of vesicles are not available.

Extending the STEPS simulator [2], we have pioneered spherical ‘vesicle’ objects that occupy a unique excluded volume and sweep a path through the tetrahedral mesh as they diffuse through the cytosol. Our vesicles incorporate endo- and exocytosis, fusion with and budding from intracellular membranes, neurotransmitter packing, as well as interactions between vesicular proteins and cytosolic and plasma membrane proteins. This allows us to model all key aspects of the synaptic vesicle cycle, including docking, priming, calcium detection and vesicle fusion, as well as dynamin-mediated vesicle retrieval and recycling.

Using quantitative measurements of protein copy numbers [3], membrane and cytosolic diffusion rates, and protein-protein interactions, together with an EM-derived spatial model of a hippocampal pyramidal neuron, we applied this technology to construct the complete synaptic vesicle cycle at the Schaffer collateral–CA1 synapse at an unprecedented level of spatial realism and biochemical detail (Fig. 1). We envisage that this new modeling technology will open up pioneering research into all aspects of neural function in which synaptic transmission plays a role.

Fig. 1

Model of a CA3-CA1 synaptic bouton with docked vesicles (blue), free vesicles (red), clustered vesicles (orange), and readily-retrievable pool (green)

References

  1.

    Südhof TC. The molecular machinery of neurotransmitter release (Nobel lecture). Angewandte Chemie International Edition. 2014; 53(47): 12696-717.

  2.

    Hepburn I, Chen W, Wils S, De Schutter E. STEPS: efficient simulation of stochastic reaction-diffusion models in realistic morphologies. BMC Systems Biology. 2012; 6: 36.

  3.

    Wilhelm BG, et al. Composition of isolated synaptic boutons reveals the amounts of vesicle trafficking proteins. Science. 2014; 344(6187): 1023-8.

P38 Priors based on abstract rules modulate the encoding of pure tones in the subcortical auditory pathway

Alejandro Tabas1, Glad Mihai1, Stefan Kiebel2, Robert Trampel3, Katharina von Kriegstein4

1Max Planck Institute for Human Cognitive and Brain Sciences, Research Group in Neural Mechanisms of Human Communication, Leipzig, Germany; 2Technische Universität Dresden, Chair of Neuroimaging, Dresden, Germany; 3Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neurophysics, Leipzig, Germany; 4Technische Universität Dresden, Chair of Cognitive and Clinical Neuroscience, Dresden, Germany

Correspondence: Alejandro Tabas (alextabas@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P38

Sensory pathways efficiently transmit information by adapting the neural responses to the local statistics of the sensory input. The predictive coding framework suggests that sensory neurons constantly match the incoming stimuli against an internal prediction derived from a generative model of the sensory input. Although predictive coding is generally accepted to underlie cortical sensory processing, the role of predictability in subcortical sensory coding is still unclear. Several studies have shown that single neurons and neuronal ensembles of the subcortical sensory pathway nuclei exhibit stimulus-specific adaptation (SSA), a phenomenon in which neurons adapt to frequently occurring stimuli (standards) yet show restored responses to a stimulus whose characteristics deviate from the standard (deviant). Although neurons showing SSA are often interpreted as encoding prediction error, computational models to date have successfully explained SSA in terms of local network effects based on synaptic fatigue.

Here, we first introduce a novel experimental paradigm in which abstract rules are used to manipulate predictability. Nineteen human participants listened to sequences of pure tones consisting of seven standards and one deviant while we recorded mesoscopic responses in the auditory thalamus and auditory midbrain using 7-Tesla functional MRI. In each sequence, the deviant was constrained to occur once and only once, always in position 4, 5 or 6. Although the three positions were equiprobable at the beginning of the trial, the conditional probability of hearing a deviant at position n after hearing n-1 standards is 1/3, 1/2, and 1 for deviant positions 4, 5, and 6, respectively.
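
These conditional probabilities follow directly from the paradigm's rule, and a few lines make that explicit (a sketch; the function name is ours):

```python
from fractions import Fraction

def deviant_hazard(possible_positions=(4, 5, 6)):
    """Conditional probability that the deviant occurs at position n,
    given that all earlier possible positions held standards, when the
    deviant is placed uniformly at one of the possible positions."""
    positions = sorted(possible_positions)
    probs = {}
    for k, n in enumerate(positions):
        # after k possible positions have passed, len(positions) - k remain
        probs[n] = Fraction(1, len(positions) - k)
    return probs

# deviant_hazard() -> {4: Fraction(1, 3), 5: Fraction(1, 2), 6: Fraction(1, 1)}
```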

This paradigm yields different outcomes for habituation and predictive coding: if adaptation is driven by local habituation alone, the three deviants should elicit similar neuronal responses; if, however, adaptation is driven by predictive coding, the neuronal responses to each deviant should depend on its abstract predictability. Our data showed that the responses to the deviants were strongly driven by abstract expectations, indicating that predictive coding is the main mechanism underlying mesoscopic SSA in the subcortical pathway. These results are robust even at the single-subject level.

Next, we developed a new model of pitch encoding for pure tones following the main principles of predictive coding. The model comprises two layers whose dynamics reflect two different levels of abstraction. The lower layer receives its inputs from the auditory nerve and exploits the finite bandwidth of the peripheral filters to decode pitch quickly and robustly. The second layer holds a sparse representation that integrates the activity of the first layer only once the pitch decision has been made. Top-down afferents from the upper layer reinforce the pitch decision and allow the inclusion of priors that facilitate the decoding of predictable tones.

Without the inclusion of priors, the model explains the key elements of SSA in animal recordings at the single-neuron level, as well as the main phenomenology of its mesoscopic representation. The inclusion of priors reflecting the abstract rules described in our paradigm facilitates the decoding of tones according to their predictability, effectively modulating the responses at the mesoscopic level. This modulation affects the mesoscopic fields generated during pitch encoding, fully explaining our experimental data.

Fig. 1

Top: z-scores of the BOLD responses to standards and deviants in the left auditory thalamus. Similar responses were recorded in the right auditory thalamus and bilateral auditory midbrain. Bottom: average model predictions under the habituation only (without informed priors) and predictive coding (with informed priors) scenarios

P39 Simulating cell to cell interactions during the cerebellar development

Mizuki Kato, Erik De Schutter

Okinawa Institute of Science and Technology, Computational Neuroscience Unit, Onna-son, Japan

Correspondence: Mizuki Kato (mizuki.kato@oist.jp)

BMC Neuroscience 2020, 21(Suppl 1):P39

The cerebellum is involved in both motor and non-motor functions in the brain. Any deficit during its development has been suggested to trigger ataxia as well as various psychiatric disorders.

During the development of both the human and mouse cerebellum, precursors of one of the main excitatory neuron types, granule cells, first accumulate in the external granule layer on the surface and subsequently migrate down to the bottom of the cerebellar cortex. In addition to this massive soma migration, the granule cell precursors also extend their axons along the migratory paths; these axons further branch into parallel fibers, making the environment even more crowded. Although palisade-like Bergmann glia physically guide granule cells during the migration, the mechanisms by which these two cell types interact to manage the migration through such a shambolic environment are still unclear.

Rodent cerebella have been widely used in experimental studies, which have provided detailed pictures of granule cells and Bergmann glia. However, technical limitations still hinder observation of cerebellar development both at the population level and in a continuous manner. Building a computational model through a reverse-engineering process that integrates the available biological observations will be essential for pinpointing differences in developmental dynamics between the normal and abnormal cerebellum.

Most computational models of neuronal development have focused on intracellular factors of single cell types. Although models simulating limited environmental factors exist, models of cell-cell interactions during neuronal development are rare. We therefore used a new computational framework, NeuroDevSim, to simulate populations of granule cells and Bergmann glia during cerebellar development.

NeuroDevSim evolved from NeuroMaC [1] and so far is the only active software that can simultaneously simulate developmental dynamics of different types of neurons at population-scale.

The target structure of the NeuroDevSim simulation comprises 3,000 granule cells and 200 Bergmann glia in a regular cube of 1x10^6 µm^3, sized to match a cube of mouse cerebellar cortex. Twenty-six Purkinje cell somas are also introduced as interfering spherical objects; their dendritic development will be included in the future. At the current stage of the simulation, reduced systems are used, aiming to direct the traffic of granule cell somas and to navigate their axonal growth.

The resulting model will enable visualization of the massive migration dynamics of cerebellar granule cells with growing parallel fibers, and of their phenomenological interactions with Bergmann glia. This model will provide new insight into the developmental dynamics of the cerebellar cortex.

Reference

  1.

    Torben-Nielsen B, De Schutter E. Context-aware modeling of neuronal morphologies. Frontiers in Neuroanatomy. 2014; 8: 92.

P40 Modeling multi-state molecules with a pythonic STEPS interface

Jules Lallouette, Erik De Schutter

Okinawa Institute of Science and Technology, Computational Neuroscience Unit, Onna-son, Japan

Correspondence: Jules Lallouette (jules.lallouette@oist.jp)

BMC Neuroscience 2020, 21(Suppl 1):P40

Molecules involved in biological signaling pathways can, in some cases, exist in a very high number of different functional states. Well-studied examples include the Ca2+/calmodulin dependent protein kinase II (CaMKII), or receptors from the ErbB family. Through a combination of binding sites, phosphorylation sites, and polymerization, these molecules can form complexes that reach exponentially increasing numbers of distinct states. This phenomenon of combinatorial explosion is a common obstacle when trying to establish detailed models of signaling pathways.
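
The combinatorial explosion is easy to quantify: with independently varying subunits, the size of the state space is the product of the per-subunit state counts (the illustrative numbers below are our assumptions, not the abstract's):

```python
def complex_states(states_per_subunit, n_subunits):
    """Size of the state space of a multimeric complex whose subunits
    vary independently: it grows exponentially with subunit count."""
    return states_per_subunit ** n_subunits

# A 12-subunit CaMKII-like holoenzyme with, say, 3 binary sites per
# subunit (2**3 = 8 states each) already has 8**12 ~ 6.9e10 states,
# far too many to enumerate as explicit species.
```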

Classical approaches to the stochastic simulation of chemical reactions require the explicit characterization of all reacting species and all associated reactions. This approach is well suited to population-based methods, in which a relatively low number of different molecules are present at relatively high concentrations. However, since each state of a multi-state complex would have to be modeled as a distinct species, the combinatorial explosion mentioned earlier makes these approaches inapplicable.

Two separate problems need to be tackled: the “specification problem” which requires a higher level of abstraction in the definition of complexes and reactions; and the “computation problem” that requires efficient methods to simulate the time evolution of reactions involving multi-state complexes. Rule based modeling (RBM) [1] tackles the former problem by allowing modelers to write “template” reactions that only contain the parts of the complexes actually involved in the reaction. Network-free methods together with particle-based methods [2] usually tackle the latter problem by only considering the states and reactions that are accessible from the current complex states and thus avoiding the computation of the full reaction network.

STEPS is a spatial stochastic reaction-diffusion simulation software that implements population based methods to simulate reaction-diffusion processes on realistic tetrahedral meshes [3]. In an effort to tackle the “specification problem” in STEPS, we present in this poster a novel, more pythonic, interface to STEPS that allows intuitive declaration of both classical and multi-state complexes reactions. Significant emphasis was put on simplifying model declaration, data access, and data saving during simulations.

To specifically tackle the “computation problem” in STEPS, we present a hybrid population/particle-based method to simulate reactions involving multi-state complexes. By preventing the formation of arbitrarily structured macromolecules, we lower the computational cost of the pattern-matching step necessary to identify potential reactants [4]. This reduced computational cost allows us to simulate larger spatial systems. We discuss these performance improvements and present examples of stochastic spatial simulations involving 12-subunit CaMKII complexes, which would previously have been intractable in STEPS.

References

  1.

    Chylek LA, Stites EC, Posner RG, Hlavacek WS. Innovations of the rule-based modeling approach. Systems Biology. 2013; 273-300.

  2.

    Tapia JJ, et al. Mcell-r: A particle-resolution network-free spatial modeling framework. Modeling Biomolecular Site Dynamics. 2019; 203-229.

  3.

    Hepburn I, Chen W, Wils S, De Schutter E. STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies. BMC Systems Biology. 2012; 6(1): 1-9.

  4.

    Blinov ML, Yang J, Faeder JR, Hlavacek WS. Graph theory for rule-based modeling of biochemical networks. Transactions on Computational Systems Biology VII. 2006; 89-106.

P41 Reaction-diffusion simulations of astrocytic Ca2+ signals in realistic geometries

Audrey Denizot1, Corrado Calì2, Weiliang Chen1, Iain Hepburn1, Hugues Berry3, Erik De Schutter1

1Okinawa Institute of Science and Technology, Computational Neuroscience Unit, Onna-son, Japan; 2Neuroscience Institute Cavalieri Ottolenghi, University of Turin, Department of Neuroscience, Turin, Italy; 3INRIA, University of Lyon, LIRIS, UMR5205 CNRS, Villeurbanne, France

Correspondence: Audrey Denizot (audrey.denizot2@oist.jp)

BMC Neuroscience 2020, 21(Suppl 1):P41

Astrocytes, glial cells of the central nervous system, display a striking diversity of Ca2+ signals in response to neuronal activity. Eighty percent of these signals take place in cellular ramifications that are too fine to be resolved by conventional light microscopy [1], often in apposition to synapses (perisynaptic astrocytic processes, PAPs). Understanding Ca2+ signaling in PAPs, where astrocytes potentially regulate neuronal information processing [2], is crucial. At this spatial scale, Ca2+ signals are not distributed uniformly but are preferentially located in so-called Ca2+ hotspots [3], suggesting the existence of subcellular spatial domains. However, because of the spatial scales involved, little is currently known about the mechanisms that regulate Ca2+ signaling in fine processes. Here, we investigate the geometry of the endoplasmic reticulum (ER), the predominant astrocytic Ca2+ store, using electron microscopy. Contrary to previous reports [4], we detect ER in PAPs, as close as ~60 nm to the nearest postsynaptic density. We use computational modeling to investigate the impact of the observed cellular and ER geometries on Ca2+ signaling. Simulations using the stochastic voxel-based model of Denizot et al [5], both in simplified and in realistic 3D geometries, reproduce spontaneous astrocytic microdomain Ca2+ transients measured experimentally. In our simulations, the effect of the clustering of IP3R channels observed in 2 spatial dimensions [5] still holds in a simple cylinder geometry but no longer holds in complex realistic geometries. We propose that these discrepancies might result from the geometry of the ER and that, in 3 spatial dimensions, the effects of molecular distributions (such as IP3R clustering) are particularly enhanced at ER-plasma membrane contact sites. Our results suggest that predictions from simulations in 1D, 2D or simplified 3D geometries should be interpreted cautiously.
Overall, this work provides a better understanding of IP3R-dependent Ca2+ signals in fine astrocytic processes and more generally in subcellular compartments, a prerequisite for understanding the dynamics of Ca2+ hotspots, which are deemed essential for local intercellular communication.

References

  1.

    Bindocci E, et al. Three-dimensional Ca2+ imaging advances understanding of astrocyte biology. Science. 2017; 356(6339).

  2.

    Savtchouk I, Volterra A. Gliotransmission: Beyond Black-and-White. Journal of Neuroscience. 2018; 38(1): 14–25.

  3.

    Thillaiappan NB, Chavda AP, Tovey SC, Prole DL, Taylor CW. Ca2+ signals initiate at immobile IP3 receptors adjacent to ER-plasma membrane junctions. Nature Communications. 2017; 8(1): 1-6.

  4.

    Patrushev I, Gavrilov N, Turlapov V, Semyanov A. Subcellular location of astrocytic calcium stores favors extrasynaptic neuron–astrocyte communication. Cell calcium. 2013; 54(5): 343-9.

  5.

    Denizot A, Arizono M, Nägerl UV, Soula H, Berry H. Simulation of calcium signaling in fine astrocytic processes: Effect of spatial properties on spontaneous activity. PLoS Computational Biology. 2019; 15(8): e1006795.

P42 A nanometer range reconstruction of the Purkinje cell dendritic tree for computational models

Mykola Medvidov, Weiliang Chen, Christin Puthur, Erik De Schutter

Okinawa Institute of Science and Technology, Computational Neuroscience Unit, Onna-son, Japan

Correspondence: Mykola Medvidov (mykola.medvidov@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P42

Purkinje neurons are used extensively in computational neuroscience [1]. However, despite extensive knowledge of Purkinje cell morphology and ultrastructure, the complete dendritic tree of a Purkinje cell, like that of other neuron types, has never been reconstructed at nanometer resolution, owing to the cell's size and complexity. At the same time, using real Purkinje cell dendritic tree morphology may be very important for computational models. Given the development of new instruments and imaging techniques that now allow reconstruction of large volumes of neuronal tissue, the main goal of our project is to reconstruct the dendritic tree of a Purkinje cell with all its dendritic spines and synapses.

Serial block-face electron microscopy (SBF) is widely used to examine large volumes of neuronal tissue at nanometer resolution [2]. To obtain volume data, perfused mouse brains were processed for SBF imaging using OTO staining, and the best-quality cerebellar slice was imaged on an FEI Teneo VS electron microscope at a voxel resolution of 8x8x60 nm. An imaged volume of approximately 2.2 terapixels was processed and aligned with ImageJ and Adobe Photoshop. To reconstruct the Purkinje cell dendritic tree, the imaged volume was first analyzed to locate the most suitable complete cell inside the volume. Second, the volume containing the cell was segmented with Ilastik [https://www.ilastik.org] and a TensorFlow deep learning network [https://github.com/tensorflow]. The super-pixels were fused with custom-made software to generate a dendritic tree represented by 3D voxels. Next, a 3D surface mesh was generated from the voxel array using the marching cubes algorithm [https://github.com/ilastik/marching_cubes], and the resulting mesh was processed with MeshLab to generate a final surface mesh. Finally, a tetrahedral volume mesh was generated with the TetWild software [https://github.com/Yixin-Hu/TetWild]. The resulting tetrahedral mesh of the full Purkinje cell dendritic tree, including the cell body and initial axonal segment, will be used to run large-scale stochastic models with the parallel STochastic Engine for Pathway Simulation (STEPS) [3] [http://steps.sourceforge.net].

References

  1.

    Zang Y, Dieudonné S, De Schutter E. Voltage- and branch-specific climbing fiber responses in Purkinje cells. Cell Reports. 2018; 24(6): 1536-49.

  2.

    Titze B, Genoud C. Volume scanning electron microscopy for imaging biological ultrastructure. Biology of the Cell. 2016; 108: 307-323.

  3.

    Chen W, De Schutter E. Parallel STEPS: Large scale stochastic spatial reaction-diffusion simulation with high performance computers. Frontiers in Neuroinformatics. 2017; 11: 1-15.

P43 Responses of a Purkinje cell model to inhibition and excitation

Gabriela C Rangel, Erik De Schutter

Okinawa Institute of Science and Technology, Computational Neuroscience Unit, Onna-son, Japan

Correspondence: Gabriela C Rangel (gabriela-capo@oist.jp)

BMC Neuroscience 2020, 21(Suppl 1):P43

Although the effects of inhibition on Purkinje cells were first observed over five decades ago and have been intensively studied since, the manner in which the cerebellar output is regulated by both inhibitory and excitatory cells has yet to be fully understood. Purkinje cells represent the sole output of the cerebellar cortex and are known to fire simple spikes as a result of the integrated excitatory and inhibitory synaptic input originating from parallel fibers and the interneurons in the molecular layer. When studied in vivo, both Purkinje cells and interneurons exhibit a highly irregular pattern of action potential firing. The mechanisms underlying the complex interaction between the intrinsic membrane properties and the pattern of synaptic inputs that generates the cerebellar output are not yet completely understood. Recent literature has underlined the importance of the inhibitory interneurons (stellate and basket cells) in shaping the simple spikes of Purkinje cells. Moreover, when inhibitory interneurons are eliminated and only asynchronous excitation is taken into account, numerous computational [1] and experimental studies have reported unrealistic behavior such as very little variability between spiking intervals, as well as very small minimum firing frequencies. The modeling approach we propose here focuses on analyzing the effects that combined inhibition and excitation have on the shape of the action potential, the firing frequency, and the intervals between simple spikes. The starting point of our work was the very detailed Purkinje cell model proposed by Zang et al [2]. Instead of varying somatic holding currents as in previous work, here the dendritic voltage states are determined by the balance between the firing frequency of inhibitory cells and that of parallel fibers.
Our preliminary results indicate that inhibition exhibits both subtractive and divisive behavior, depending on the stellate cell frequency. We discuss in detail the different firing patterns we obtained. In particular, our results capture not only simple spikes but also a trimodal firing pattern previously observed experimentally [3]. This trimodal firing pattern is characteristic of mature Purkinje cells and consists of a mixture of three phases: tonic firing, bursting and silent mode. We mapped the regions in which simple spiking occurs and those in which the trimodal pattern appears, and we further investigated the role of the SK2 channels in eliminating or prolonging the trimodal pattern.

References

  1.

    De Schutter ER, Bower JM. An active membrane model of the cerebellar Purkinje cell II. Simulation of synaptic responses. Journal of neurophysiology. 1994; 71(1): 401-19.

  2.

    Zang Y, Dieudonné S, De Schutter E. Voltage-and branch-specific climbing fiber responses in Purkinje cells. Cell reports. 2018; 24(6): 1536-49.

  3.

    Womack MD, Khodakhah K. Somatic and dendritic small-conductance calcium-activated potassium channels regulate the output of cerebellar Purkinje neurons. Journal of Neuroscience. 2003; 23(7): 2600-7.

P44 3D modeling of Purkinje cell activity

Alexey Martyushev, Erik De Schutter

Okinawa Institute of Science and Technology, Computational Neuroscience Unit, Onna-son, Japan

Correspondence: Alexey Martyushev (martyushev.alexey@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P44

The NEURON software remains the principal neural physiology modeling tool for scientists. Its computational methods rely on deterministic approximations of the cable-equation solutions and 1-dimensional radial calcium diffusion in cylindrical neuron morphologies [1]. However, in real neurons ions diffuse in 3-dimensional volumes [2] and membrane channels activate stochastically. Furthermore, NEURON is not suited to modeling nano-sized spine morphology. In contrast, the STochastic Engine for Pathway Simulation (STEPS) uses fully stochastic 3-dimensional methods in tetrahedral morphologies that allow realistic modeling of neurons at the nanoscale [3, 4].

In this work, we compare the modeling results between these two environments for the Purkinje cell model developed by Zang et al. [5]. This model considers a variety of calcium, potassium and sodium channels, and the resulting calcium concentrations affecting the membrane potential of a Purkinje cell. The results demonstrate that: (i) the cylindrical light-microscopy morphology used cannot be identically transformed into a 3D mesh; (ii) stochastic channel activation determines the timing of membrane potential spikes; (iii) the kinetics of calcium-activated potassium channels depend strongly on the specified sub-membrane volumes in both environments.

A further step in developing the model will be the integration of a digital microscopy reconstruction of spines into the existing 3D tetrahedral mesh.

References

  1.

    Carnevale NT, Hines ML. The NEURON book. Cambridge University Press; 2006 Jan 12.

  2.

    Anwar H, Roome CJ, Nedelescu H, Chen W, Kuhn B, De Schutter E. Dendritic diameters affect the spatial variability of intracellular calcium dynamics in computer models. Frontiers in cellular neuroscience. 2014; 8: 168.

  3.

    Hepburn I, Chen W, Wils S, De Schutter E. STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies. BMC systems biology. 2012; 6(1): 1-9.

  4.

    Chen W, De Schutter E. Time to bring single neuron modeling into 3D. 2017.

  5.

    Zang Y, Dieudonné S, De Schutter E. Voltage-and branch-specific climbing fiber responses in Purkinje cells. Cell reports. 2018; 24(6): 1536-49.

P45 Exploring relevant spatiotemporal scales for analyses of brain dynamics

Xenia Kobeleva1, Ane López-González2, Morten L Kringelbach3, Gustavo Deco4

1University of Bonn, Department of Neurology, Bonn, Germany; 2Universitat Pompeu Fabra, Computational Neuroscience Group, Barcelona, Spain; 3University of Oxford, Department of Psychiatry, Oxford, United Kingdom; 4Universitat Pompeu Fabra, Barcelona, Spain

Correspondence: Xenia Kobeleva (xkobeleva@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P45

Introduction: The brain switches between cognitive states at high speed by rearranging interactions between distant brain regions. Using analyses of brain dynamics, neuroimaging researchers have been able to further describe this dynamic brain-behavior relationship. However, the diversity of methodological choices in brain dynamics analyses impedes comparisons between studies, reducing their reproducibility and generalizability. A key choice is the spatiotemporal scale of the analysis, which includes both the number of regions (spatial scale) and the sampling rate (temporal scale). Choosing a suboptimal scale might either lose information or yield inefficient analyses with increased noise. The aim of this study was therefore to assess the effect of different spatiotemporal scales on analyses of brain dynamics and to determine which spatiotemporal scale retrieves the most relevant information on dynamic spatiotemporal patterns of brain regions.

Methods: We compared the effect of different spatiotemporal scales on the information content of the evolution of spatiotemporal patterns using empirical as well as simulated timeseries. Empirical timeseries were extracted from the Human Connectome Project [1]. We then created a whole-brain mean-field model of neural activity [2] that reproduced key properties of the empirical data by fitting the global synchronization level and measures of dynamical functional connectivity. This resulted in different spatiotemporal scales, with spatial scales ranging from 100 to 900 regions and temporal scales ranging from milliseconds to seconds. With a variation of an eigenvalue analysis [3], we estimated the number of spatiotemporal patterns over time and then extracted these patterns with an independent component analysis. The evolution of these patterns was then compared between scales with regard to the richness of switching activity (corrected for the total number of patterns) using the measure of entropy. Given the probability of the occurrence of each pattern over time, we defined the entropy as a function of the pattern probabilities.
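
As we read it, this entropy is the Shannon entropy of the empirical distribution of pattern occurrences; a minimal sketch (the function name is ours):

```python
import math

def pattern_entropy(occurrence_counts):
    """Shannon entropy (in bits) of the empirical probability of each
    spatiotemporal pattern occurring over the recording."""
    total = sum(occurrence_counts)
    probs = [c / total for c in occurrence_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Richer switching between patterns yields higher entropy:
# pattern_entropy([25, 25, 25, 25]) == 2.0  (maximal for 4 patterns)
# pattern_entropy([97, 1, 1, 1]) ~ 0.24     (one pattern dominates)
```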

Results: Using the entropy measure, we were able to identify optimal spatial and temporal scales for the evolution of spatiotemporal patterns. The entropy followed an inverted U-shaped function of the spatial scale, with the highest value at an intermediate parcellation of n = 300. The entropy was highest at a temporal scale of around 200 ms.

Conclusions and discussion: We investigated which spatiotemporal scale contains the highest information content for brain dynamics analyses. By combining whole-brain computational modelling with an estimation of the number of resulting patterns, we were able to analyze whole-brain dynamics at different spatial and temporal scales. From a probabilistic perspective, we explored the entropy of the probability of the resulting brain patterns, which was highest at a parcellation of n = 300. Our results indicate that although more spatiotemporal patterns with increased heterogeneity are found with higher parcellations, the most relevant information on brain dynamics is captured when using a spatial scale of n = 300 and a temporal scale of 200 ms. Our results therefore provide guidance for researchers choosing the optimal spatiotemporal scale in studies of brain dynamics.

References

  1.

    Van Essen DC, Smith SM, Barch DM, Behrens TE, Yacoub E, Ugurbil K, Wu-Minn HCP Consortium. The WU-Minn human connectome project: an overview. Neuroimage. 2013; 80: 62-79.

  2.

    Deco G, et al. Resting-state functional connectivity emerges from structurally and dynamically shaped slow linear fluctuations. Journal of Neuroscience. 2013; 33(27): 11239-52.

  3.

    Deco G, Cruzat J, Kringelbach ML. Brain songs framework used for discovering the relevant timescale of the human brain. Nature Communications. 2019; 10(1): 1-3.

P46 Faster gradient descent learning

Ho Ling Li1, Mark van Rossum2

1University of Nottingham, School of Psychology, Nottingham, United Kingdom; 2University of Nottingham, School of Psychology and School of Mathematical Sciences, Nottingham, United Kingdom

Correspondence: Ho Ling Li (holing.li@nottingham.ac.uk)

BMC Neuroscience 2020, 21(Suppl 1):P46

Back-propagation is a popular machine learning algorithm that uses gradient descent to train neural networks for supervised learning. In stochastic gradient descent, a cost function C is minimized by adjusting the weights wij as Δwij = −η (∂C/∂wij) at every training sample. However, learning with back-propagation can be very slow. A number of algorithms have been developed to speed up convergence and improve the robustness of learning. One approach is to start with a high learning rate and anneal it to lower values at the end of learning. Others combine past updates with the current weight update, such as momentum [1] and Adam [2]. These algorithms are now standard in most machine learning studies, but they are complicated to implement biologically.

Inspired by synaptic competition in biology, we have developed a simple, local gradient descent optimization algorithm that reduces training time without requiring any past information. Our algorithm works like the traditional gradient descent used in back-propagation, except that instead of a uniform learning rate across all synapses, the learning rate depends on the current connection weights of individual synapses and the L2 norm of the weights of each neuron.

Our algorithm encourages neurons to form strong connections to a handful of neurons in their neighbouring layers by assigning a higher learning rate ηij to synapses with bigger weights wij: Δwij = −η0 (|wij| + α)/(||wj|| + α) (∂C/∂wij), where i indexes the post-synaptic neurons and j indexes the pre-synaptic neurons. The parameter α is set such that at the beginning of training α > ||wj||, |wij|, so that all synapses have a learning rate close to η0. As learning progresses, the learning rate of large synapses stays close to η0, while the learning rate of small synapses decreases. Here, ||wj|| is computed over all the post-synaptic weights of a pre-synaptic neuron, leading each pre-synaptic neuron to form strong connections to only a limited number of post-synaptic neurons. However, our algorithm also works when this term is replaced with ||wi||, which instead promotes every post-synaptic neuron to form strong connections to a small number of pre-synaptic neurons. We note that the proposed modulation of learning can easily be imagined to occur in biology, as it only requires post-synaptic factors and no memory of past updates.
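A minimal NumPy sketch of this update rule (variable names and values are illustrative; W[i, j] denotes the weight from pre-synaptic neuron j to post-synaptic neuron i):

```python
import numpy as np

def competitive_sgd_step(W, grad, eta0, alpha):
    """One gradient step with per-synapse learning rates
    eta_ij = eta0 * (|w_ij| + alpha) / (||w_j|| + alpha),
    where ||w_j|| is the L2 norm over all post-synaptic weights of
    pre-synaptic neuron j (a column of W). Large weights keep a rate
    near eta0; small weights are slowed as ||w_j|| grows.
    """
    col_norm = np.linalg.norm(W, axis=0, keepdims=True)  # ||w_j||, shape (1, n_pre)
    eta = eta0 * (np.abs(W) + alpha) / (col_norm + alpha)
    return W - eta * grad
```

When α dominates both |wij| and ||wj|| (as at the start of training), η ≈ η0 everywhere and the step reduces to plain stochastic gradient descent.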

We have tested our algorithm on back-propagation networks with one hidden layer of 100 units classifying the MNIST handwritten digit dataset at 96% accuracy. Compared to networks equipped with the best constant learning rate, networks train 24% faster with our algorithm. The improvement is even greater for smaller networks: with 50 units in the hidden layer, our algorithm shortens the training time by 40% relative to the best constant learning rate. Preliminary results also show that our algorithm is comparable to Adam for the small networks we have tested. Thus, our algorithm demonstrates the feasibility of a local, biologically plausible gradient descent optimization algorithm that only requires online information.

References

1. Plaut D, Nowlan S, Hinton G. Experiments on Learning by Back Propagation. Technical Report CMU-CS-86-126. 1986.

2. Kingma DP, Ba J. Adam: A method for stochastic optimization. arXiv preprint: 1412.6980. 2014.

P47 Brain dynamics and structure-function relationships via spectral factorization and the transfer function

James Henderson1, Peter Robinson1, Mukesh Dhamala2

1The University of Sydney, School of Physics, Sydney, Australia; 2Georgia State University, Department of Physics and Astronomy, Atlanta, Georgia, United States of America

Correspondence: James Henderson (james.henderson@sydney.edu.au)

BMC Neuroscience 2020, 21(Suppl 1):P47

The relationships between brain activity and structure are of central importance to understanding how the brain carries out its functions and to interrelating and predicting different kinds of experimental measurements. The aim of this work is to first describe the transfer function and its relationships to many existing forms of brain analysis, and then to describe methods for obtaining the transfer function, with emphasis on spectral factorization using the Wilson algorithm [1,2] applied to correlations of time series measurements.

The transfer function of a system contains complete information about its linear properties, responses, and dynamics. This includes relationships to impulse responses, spectra, and correlations. In the case of brain dynamics, it has been shown that the transfer function is closely related to brain connectivity, including time delays, and we note that linear coupling is widely used to model the spatial interactions of locally nonlinear dynamics.

It is shown how the brain's linear transfer function provides a means of systematically analyzing brain connectivity and dynamics, yielding a robust way of inferring connectivity as well as activity measures such as spectra, evoked responses, coherence, and causality, all of which are widely used in brain monitoring. Additionally, the eigenfunctions of the transfer function are natural modes of the system dynamics and thus underlie spatial patterns of excitation in the cortex. The transfer function is therefore a suitable object for describing and analyzing the structure-function relationship in brains.

The Wilson spectral factorization algorithm is outlined and used to efficiently obtain linear transfer functions from experimental two-point correlation functions. Criteria on time series measurements that allow the algorithm to accurately reconstruct the transfer function are described, including a comparison of the algorithm's theoretical computational complexity with empirical runtimes for systems of similar size to current experiments. The algorithm is applied to a series of examples of increasing complexity and similarity to real brain structure, to verify that it is free of numerical errors and instabilities (modifying the method where required to ensure this). The results of applying the algorithm to a 1D test case with asymmetry and time delays are shown in Fig. 1. The method is tested on increasingly realistic structures using neural field theory, introducing time delays, asymmetry, dimensionality, and complex network connectivity to verify the algorithm's suitability for use on experimental data.

Fig. 1

Reconstruction of the transfer function and propagator using the Wilson algorithm from a correlation matrix for a 1D test case with asymmetry and time delays. a The original (black lines) and reconstructed (colored lines) transfer functions as a function of space, for four time delays. b As for a, but for the system propagator. c The real part of the original (black line) and reconstructed (red line) transfer function as a function of frequency. d As for c, but for the system propagator. e The imaginary part of the original (black line) and reconstructed (red line) transfer function as a function of frequency. f As for e, but for the system propagator
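The Wilson algorithm itself is an iterative method for matrix-valued spectra; for the scalar case, minimum-phase spectral factorization can be sketched directly via the cepstrum, which illustrates the underlying idea of recovering a transfer function H with |H|² = S from a power spectrum S (an illustrative special case, not the algorithm used in the abstract):

```python
import numpy as np

def scalar_spectral_factor(S):
    """Minimum-phase H(w) with |H(w)|^2 = S(w), for a real, positive,
    symmetric power spectrum S sampled on n FFT bins (cepstral method;
    the matrix-valued case requires the iterative Wilson algorithm).
    """
    n = len(S)
    cep = np.fft.ifft(0.5 * np.log(S))  # cepstrum of log sqrt(S)
    w = np.zeros(n)
    w[0] = 1.0
    w[1:(n + 1) // 2] = 2.0             # double the causal part
    if n % 2 == 0:
        w[n // 2] = 1.0
    return np.exp(np.fft.fft(w * cep))  # exponentiate causal log-spectrum
```

Keeping only the causal part of the cepstrum before exponentiating yields the unique minimum-phase (stable, causal) factor of the spectrum.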

Acknowledgements: This work was supported by the Australian Research Council under Center of Excellence grant CE140100007 and Laureate Fellowship grant FL140100025.

References

1. Dhamala M, Rangarajan G, Ding M. Estimating Granger causality from Fourier and wavelet transforms of time series data. Physical Review Letters. 2008; 100(1): 018701.

2. Wilson GT. The Factorization of Matricial Spectral Densities. SIAM Journal on Applied Mathematics. 1972; 23: 420.

P48 Large-scale spiking network models of primate cortex as research platforms

Sacha J van Albada1,2, Aitor Morales-Gregorio1,3, Alexander van Meegen1,3, Jari Pronold1,3, Agnes Korcsak-Gorzo1,3, Hannah Vollenbröker1,4, Rembrandt Bakker1,5, Stine B Vennemo6, Håkon B Mørk6, Jasper Albers1,3, Hans E Plesser1,6, Markus Diesmann1,7,8

1Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich, Germany; 2Institute of Zoology, University of Cologne, Cologne, Germany; 3RWTH Aachen University, Aachen, Germany; 4Heinrich Heine University Düsseldorf, Düsseldorf, Germany; 5Radboud University, Donders Institute for Brain, Cognition and Behavior, Nijmegen, Netherlands; 6Norwegian University of Life Sciences, Faculty of Science and Technology, Ås, Norway; 7RWTH Aachen University, Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, Aachen, Germany; 8RWTH Aachen University, Department of Physics, Faculty 1, Aachen, Germany

Correspondence: Sacha J van Albada (s.van.albada@fz-juelich.de)

BMC Neuroscience 2020, 21(Suppl 1):P48

Despite the wide variety of available models of the cerebral cortex, a unified understanding of cortical structure, dynamics, and function at different scales is still missing. Key to progress in this endeavor will be to bring together the different accounts into unified models. We aim to provide a stepping stone in this direction by developing large-scale spiking neuronal network models of primate cortex that reproduce a combination of microscopic and macroscopic findings on cortical structure and dynamics. A first model describes resting-state activity in all vision-related areas in one hemisphere of macaque cortex [1,2], representing each of the 32 areas with a 1 mm² microcircuit [3] with the full density of neurons and synapses. Comprising about 4 million leaky integrate-and-fire neurons and 24 billion synapses, it is simulated on the Jülich supercomputers. The model has recently been ported to NEST 3, greatly reducing the construction time. The inter-area connectivity is based on axonal tracing [4] and predictive connectomics [5]. Findings reproduced include the spectrum and rate distribution of V1 spiking activity [6], feedback propagation of activity across the visual hierarchy [7], and a pattern of functional connectivity between areas as measured with fMRI [8]. The model is available open-source [https://inm-6.github.io/multi-area-model/] and uses the tool Snakemake [9] for formalizing the workflow from the experimental data to simulation, analysis, and visualization. It serves as a platform for further developments, including an extension with motor areas [10] for studying visuo-motor interactions, incorporating function using a learning-to-learn framework [11], and creating an analogous model of human cortex [12]. It is our hope that this work will contribute to an increasingly unified understanding of cortical structure, dynamics, and function.

Acknowledgments: EU Grants 269921 (BrainScaleS), 604102 (HBP SGA1), 785907 (HBP SGA2), HBP SGA ICEI 800858; VSR computation time grant JINB33; DFG SPP 2041.

References

1. Schmidt M, Bakker R, Hilgetag CC, Diesmann M, van Albada SJ. Multi-scale account of the network structure of macaque visual cortex. Brain Structure and Function. 2018; 223(3): 1409-35.

2. Schmidt M, Bakker R, Shen K, Bezgin G, Diesmann M, van Albada SJ. A multi-scale layer-resolved spiking network model of resting-state dynamics in macaque visual cortical areas. PLoS Computational Biology. 2018; 14(10): e1006359.

3. Potjans TC, Diesmann M. The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model. Cerebral Cortex. 2014; 24(3): 785-806.

4. Bakker R, Wachtler T, Diesmann M. CoCoMac 2.0 and the future of tract-tracing databases. Frontiers in Neuroinformatics. 2012; 6: 30.

5. Hilgetag CC, Beul SF, van Albada SJ, Goulas A. An architectonic type principle integrates macroscopic cortico-cortical connections with intrinsic cortical circuits of the primate brain. Network Neuroscience. 2019; 3(4): 905-23.

6. Chu CC, Chien PF, Hung CP. Tuning dissimilarity explains short distance decline of spontaneous spike correlation in macaque V1. Vision Research. 2014; 96: 113-32.

7. Nir Y, et al. Regional slow waves and spindles in human sleep. Neuron. 2011; 70(1): 153-69.

8. Babapoor-Farrokhran S, Hutchison RM, Gati JS, Menon RS, Everling S. Functional connectivity patterns of medial and lateral macaque frontal eye fields reveal distinct visuomotor networks. Journal of Neurophysiology. 2013; 109(10): 2560-70.

9. Köster J, Rahmann S. Snakemake—a scalable bioinformatics workflow engine. Bioinformatics. 2012; 28(19): 2520-2.

10. Morales-Gregorio A, et al. Bernstein Conference. 2019; https://abstracts.g-node.org/conference/BC19/abstracts#/uuid/0e114351-af44-4cc8-b457-acb41f9064e6

11. Korcsak-Gorzo A, et al. Bernstein Conference. 2019; https://abstracts.g-node.org/conference/BC19/abstracts#/uuid/e28680aa-eb4c-448e-b2a8-2e0b32c2ff49

12. Pronold J, Bakker R, Morales-Gregorio A, van Albada S, van Meegen A. Multi-area spiking network models of macaque and human cortices. In NEST Conference 2019 (No. FZJ-2019-04472). Computational and Systems Neuroscience.

P49 Electro-diffusion of ions in dendritic signal transduction

Yinyun Li1, Alexander Dimitrov2

1Beijing Normal University, Department of Management, Beijing, China; 2Washington State University Vancouver, Vancouver, Washington, United States of America

Correspondence: Yinyun Li (leeyinyun@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P49

Electrical dynamics of cellular membranes is central to our understanding of information processing in neurons. Until recently, the physics of ionic dynamics was largely ignored [1,2]. Indeed, ionic concentrations are usually not significantly altered by the membrane conductance. However, their effects may be sizeable when the intracellular volume is relatively small [3,4]. More importantly, a sudden change in concentration at one location may lead to gradients of ionic concentrations within a neural process. In our work, we demonstrate some realistic neural processes in which this effect is significant. The Nernst-Planck equation of electro-diffusion was applied to a dendrite, and voltage-gated potassium and sodium channels were added to the system. The dynamics of ions and membrane voltage with and without electro-diffusion were collected and compared. We found that voltage is the main driving force for the membranous ion fluxes, and that the feedback loop from ion concentrations to the membrane voltage may dramatically change the dynamics of the membrane voltage. When voltage-gated calcium influx and electro-diffusion were added to the system, the dynamics of potassium became dramatically different from the case without electro-diffusion. We conclude that the electro-diffusion of ions in a small volume may significantly and non-linearly change neural information processing.
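For a single ionic species in one dimension, the Nernst-Planck flux combines a diffusion term and a voltage-driven drift term. A finite-difference sketch (physical constants are standard; the grid and parameter values in the comments are illustrative, not the study's actual settings):

```python
import numpy as np

F = 96485.0  # Faraday constant, C/mol
R = 8.314    # gas constant, J/(mol K)
T = 310.0    # temperature, K

def nernst_planck_flux(c, V, dx, D, z):
    """1D Nernst-Planck flux J = -D * (dc/dx + (z*F/(R*T)) * c * dV/dx)
    for a concentration profile c (mol/m^3) and potential V (volts) on a
    uniform grid with spacing dx (m), diffusion coefficient D (m^2/s),
    and valence z. Gradients use central differences at interior points.
    """
    dcdx = np.gradient(c, dx)  # diffusion driving force
    dVdx = np.gradient(V, dx)  # electrical driving force
    return -D * (dcdx + (z * F / (R * T)) * c * dVdx)
```

With a uniform concentration the diffusive term vanishes and the flux is pure drift, proportional to the voltage gradient; the coupling of both terms to c is what makes the concentration-voltage feedback non-linear.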

Acknowledgements: This work is supported by the China Scholarship Council.

References

1. Savtchenko LP, Poo MM, Rusakov DA. Electrodiffusion phenomena in neuroscience: a neglected companion. Nature Reviews Neuroscience. 2017; 18(10): 598.

2. Solbrå A, et al. A Kirchhoff-Nernst-Planck framework for modeling large scale extracellular electrodiffusion surrounding morphologically detailed neurons. PLoS Computational Biology. 2018; 14(10): e1006510.

3. Qian N, Sejnowski TJ. An electro-diffusion model for computing membrane potentials and ionic concentrations in branching dendrites, spines and axons. Biological Cybernetics. 1989; 62(1): 1-5.

4. Lopreore CL, et al. Computational modeling of three-dimensional electrodiffusion in biological systems: application to the node of Ranvier. Biophysical Journal. 2008; 95(6): 2624-35.

P50 Spatial denoising through topographic modularity in networks of spiking neurons

Barna Zajzon1, Abigail Morrison2, Renato Duarte3

1Jülich Research Center, Institute of Neuroscience and Medicine (INM-6), Jülich, Germany; 2Forschungszentrum Jülich GmbH, Jülich, Germany; 3Jülich Research Center, Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich, Germany

Correspondence: Barna Zajzon (b.zajzon@fz-juelich.de)

BMC Neuroscience 2020, 21(Suppl 1):P50

Topographic maps are a pervasive structural feature of the mammalian brain, present throughout the cortical hierarchy and particularly prevalent in the early sensory systems. These ordered projections arrange and preserve the relative organization of cells between distinct populations and have been the object of many empirical studies. From providing a structural scaffold for spatial information segregation to the organization of spatiotemporal feature maps, these ubiquitous anatomical features are known to have significant, albeit not entirely understood, functional consequences.

In this work, we systematically investigate the functional and dynamical impact of the characteristics of modular propagation pathways in large networks of spiking neurons. Specifically, we manipulate key structural parameters such as modularity, map size and degree of overlap, and evaluate their impact on the network dynamics and computational performance during a continuous signal reconstruction task from noisy inputs. We show that transmission accuracy increases as topographic projections become more structured, with even moderate degrees of modularity improving overall discrimination capability.
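A purely illustrative sketch of how a modularity parameter of this kind might be implemented as a connectivity rule; the base probability, module layout, and rescaling are hypothetical, not the parameters of the model in the abstract:

```python
import numpy as np

def topographic_connectivity(n_src, n_tgt, n_modules, modularity,
                             p_base=0.1, seed=0):
    """Boolean connectivity matrix between two populations split into
    n_modules topographic groups. modularity = 0 gives uniform random
    connectivity at p_base; modularity = 1 confines all connections to
    the matching module. Probabilities are rescaled so the average
    connection probability stays at p_base (for equal-sized modules).
    """
    rng = np.random.default_rng(seed)
    src_mod = np.arange(n_src) * n_modules // n_src   # module of each source
    tgt_mod = np.arange(n_tgt) * n_modules // n_tgt   # module of each target
    same = src_mod[:, None] == tgt_mod[None, :]
    p_in = p_base * (1.0 + modularity * (n_modules - 1))  # within-module
    p_out = p_base * (1.0 - modularity)                   # across modules
    p = np.where(same, p_in, p_out)
    return rng.random((n_src, n_tgt)) < p
```

Sweeping `modularity` between 0 and 1 then interpolates between unstructured and strictly topographic projections while holding the total connection density roughly fixed.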

Moreover, we identify a condition in which the global population statistics converge towards a stable asynchronous irregular regime, allowing for linear firing rate propagation along the topographic maps. In such conditions, the networks exhibit spatial denoising properties and task performance improves vastly with hierarchical depth. Importantly, the relative performance gain throughout the hierarchy increases with the amount of noise in the input. This suggests that topographic modularity is not only essential for accurate neural communication, but can also provide the structural underpinnings to handle noisy and corrupted information streams.

Using field-theoretic approximations, we demonstrate that this phenomenon can be attributed to a disruption in the E-I balance throughout the network. By changing the effective connectivity within the system, strongly modular projections facilitate the emergence of inhibition-dominated regimes where population responses along active maps are amplified, whereas others are weakened or silenced.

In addition, we analytically derive constraints on the possible extent of such maps, given that in cortical networks topographic specificity is assumed to decrease with hierarchical depth. Our findings suggest that, while task performance is relatively robust to variation in map size, there is a fine balance between the spatial extent of the topographic projections and their modularity. Maps cannot become arbitrarily small and must compensate for their size through denser topographic connections, whereas denser connectivity can be detrimental for larger maps if they overlap.

Taken together, these results highlight the functional benefits of structured connectivity in hierarchical neural networks, and shed light on a potential new role for modular topographic maps as a denoising mechanism.

Acknowledgements: The authors gratefully acknowledge the computing time granted by the JARA Vergabegremium and provided on the JARA Partition part of the supercomputer JURECA at Forschungszentrum Jülich.

P51 Data-driven multiscale modeling with NetPyNE: new features and example models

Joe Graham1, Matteo Cantarelli2, Filippo Ledda2, Dario Del Piano2, Facundo Rodriguez2, Padraig Gleeson3, Samuel A Neymotin4, Michael Hines5, William W Lytton6, Salvador Dura-Bernal6

1SUNY Downstate Medical Center, Neurosim Lab, Brooklyn, New York, United States of America; 2MetaCell, LLC, Cambridge, Massachusetts, United States of America; 3University College London, Department of Neuroscience, Physiology & Pharmacology, London, United Kingdom; 4Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America; 5Yale University, School of Medicine, New Haven, Connecticut, United States of America; 6SUNY Downstate Medical Center, Department of Physiology and Pharmacology, Brooklyn, New York, United States of America

Correspondence: Joe Graham (joe.w.graham@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P51

Neuroscience experiments generate vast amounts of data that span multiple scales: from interactions between individual molecules, to behavior of cells, to circuit activity, to waves of activity across the brain. Biophysically-realistic computational modeling provides a tool to integrate and organize experimental data at multiple scales. NEURON is a leading simulator for detailed neurons and neuronal networks. However, building and simulating networks in NEURON is technically challenging, requiring users to implement custom code for many tasks. Also, lack of format standardization makes it difficult to understand, reproduce, and reuse many existing models.

NetPyNE is a Python interface to NEURON which addresses these issues. It features a user-friendly, high-level declarative programming language. At the network level for example, NetPyNE automatically generates connectivity using a concise set of user-defined specifications rather than forcing the user to explicitly define millions of cell-to-cell connections. NetPyNE enables users to generate NEURON models, run them efficiently in automatically parallelized simulations, optimize and explore network parameters through automated batch runs, and use built-in functions for a wide variety of visualizations and analyses. NetPyNE facilitates sharing by exporting and importing standardized formats (NeuroML and SONATA), and is being widely used to investigate different brain phenomena. It is also being used to teach basic neurobiology and neural modeling. NetPyNE has recently added support for CoreNEURON, the compute engine of NEURON optimized for the latest supercomputer hardware architectures.
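As an illustration of the declarative style, a connectivity rule is specified as a high-level dictionary rather than as explicit cell-to-cell connections. The keys below follow NetPyNE's documented specification format, but the populations and values are hypothetical:

```python
# Hypothetical network specification in NetPyNE's declarative style.
# In an actual model these dictionaries would be attached to a
# netpyne.specs.NetParams object and passed to netpyne.sim.
pop_params = {
    "E": {"cellType": "PYR", "numCells": 80},  # excitatory population
    "I": {"cellType": "BAS", "numCells": 20},  # inhibitory population
}

conn_params = {
    "E->I": {
        "preConds": {"pop": "E"},   # all cells in population E...
        "postConds": {"pop": "I"},  # ...to all cells in population I
        "probability": 0.1,         # NetPyNE expands this single rule
        "weight": 0.005,            # into individual connections,
        "delay": 5,                 # so none are listed by hand
    }
}
```

One rule like this replaces the thousands of lines of explicit connection code that raw NEURON would require.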

In order to make NetPyNE accessible to a wider range of researchers and students, including those with limited programming experience, and to encourage further collaboration between experimentalists and modelers, all its functionality is accessible via a state-of-the-art graphical user interface (GUI). From a browser window, users can intuitively define their network models, visualize and manipulate their cells and networks in 3D, run simulations, and visualize data and analyses. The GUI includes an interactive Python console which synchronizes with the underlying Python-based model.

The NetPyNE GUI is currently being improved in several ways. Flex Layout is being introduced to ensure a responsive, customizable GUI layout regardless of screen size or orientation. Redux is being added to the stack to ensure the complete state of the app is known at all times, minimizing bugs and improving performance. Bokeh is being used to create interactive plots. Furthermore, by integrating NetPyNE with Open Source Brain, users will be able to create online accounts to manage different workspaces and models (create, save, share, etc.). This will allow interaction with online repositories to pull data and models into NetPyNE projects, from resources such as ModelDB, NeuroMorpho, GitHub, etc.

In this poster, we present the latest improvements in NetPyNE and discuss recent data-driven multiscale models utilizing NetPyNE for different brain regions, including: primary motor cortex, primary auditory cortex, and a canonical neocortex model underlying the Human Neocortical Neurosolver, a software tool for interpreting the origin of MEG/EEG data.

Acknowledgments: Supported by NIH U24EB028998, U01EB017695, DOH01-C32250GG-3450000, R01EB022903, R01MH086638, R01DC012947, and ARO W911NF1910402.

P52 Connectivity modulation by dual-site transcranial alternating current stimulation based on spike-timing dependent plasticity

Bettina Schwab1, Peter König2, Andreas K Engel1

1University Medical Center Hamburg-Eppendorf, Department of Neurophysiology and Pathophysiology, Hamburg, Germany; 2University of Osnabrück, Institute of Cognitive Science, Osnabrück, Germany

Correspondence: Bettina Schwab (b.schwab@uke.de)

BMC Neuroscience 2020, 21(Suppl 1):P52

Transcranial alternating current stimulation (tACS) noninvasively applies electric fields to the brain with the aim of entraining neural activity. As recordings of extracellular potentials are highly affected by the stimulation artifact, M/EEG recordings before and after tACS have become the standard way to investigate neurophysiological effects of tACS in humans. In particular, we recently showed that dual-site tACS can modulate functional connectivity between the targeted regions outlasting the stimulation period, with in-phase stimulation (phase lag zero) increasing connectivity compared to anti-phase stimulation (phase lag π) [1]. Although the mechanism for such after-effects is not known, spike-timing dependent plasticity (STDP) has been proposed as a candidate [2,3]. We aim (1) to find a possible mechanism for our experimentally observed connectivity changes, and (2) to assess whether our dual-site tACS setting can successfully be extended to any stimulation frequency and target area.

We simulated two populations of 1000 regularly spiking Izhikevich neurons [4] each, with realistic firing rate distributions. Excitatory connections from each neuron to 100 random neurons of the other population, with synaptic delay, were subject to an experimentally observed STDP rule [5]. tACS was applied as sinusoidal input currents to both populations with varying phase lags. To validate our model, we correlated experimentally observed connectivity changes [1] between targeted sub-regions with the length of the fibers connecting those sub-regions.
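A pair-based STDP rule of this kind can be sketched as an exponential window over the pre-post spike time difference. The amplitudes and time constants below are generic placeholders, not the fitted values of the rule in [5]:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.005, a_minus=0.00525,
            tau_plus=0.02, tau_minus=0.02):
    """Weight change for a spike pair separated by dt = t_post - t_pre
    (seconds): potentiation when the pre-synaptic spike precedes the
    post-synaptic spike (dt >= 0), depression otherwise, both decaying
    exponentially with |dt|. Parameter values are illustrative.
    """
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))
```

Because the window decays with |dt|, the net weight change under periodic drive depends on how the conduction delay shifts spike pairs relative to the tACS cycle, which is why the sign of the effect varies with frequency and delay.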

Synaptic weight changes depended on tACS frequency, phase lag, and synaptic delay. For 10 Hz tACS, the difference in synaptic weight change between in- and anti-phase tACS monotonically decreased within the range of physiological cortico-cortical conduction delays. Confirming this finding, our experimental data showed a negative correlation between functional connectivity modulation (in-phase vs anti-phase tACS) and fiber length. Extending the simulations to other tACS frequencies, we find that the observed direction of connectivity modulation is only expected for low tACS frequencies combined with small delays, or with delays near multiples of the tACS cycle length.

In conclusion, our experimental findings [1] are in accordance with STDP of synapses between the stimulated regions. Nevertheless, the approach cannot be generalized to all tACS frequencies and delays. Most robust effects are expected for low tACS frequencies and small delays.

Acknowledgements: This work has been supported by DFG, SFB 936/project A3.

References

1. Schwab BC, Misselhorn J, Engel AK. Modulation of large-scale cortical coupling by transcranial alternating current stimulation. Brain Stimulation. 2019; 12(5): 1187-96.

2. Wischnewski M, et al. NMDA receptor-mediated motor cortex plasticity after 20 Hz transcranial alternating current stimulation. Cerebral Cortex. 2019; 29(7): 2924-31.

3. Vossen A, Gross J, Thut G. Alpha power increase after transcranial alternating current stimulation at alpha frequency (α-tACS) reflects plastic changes rather than entrainment. Brain Stimulation. 2015; 8(3): 499-508.

4. Izhikevich EM. Simple model of spiking neurons. IEEE Transactions on Neural Networks. 2003; 14(6): 1569-72.

5. Froemke RC, Dan Y. Spike-timing-dependent synaptic modification induced by natural spike trains. Nature. 2002; 416(6879): 433-8.

P53 A simple, non-stationary normalization model to explain and successfully predict change detection in monkey area MT

Detlef Wegener1, Xiao Chen2, Lisa Bohnenkamp2, Fingal O Galashan1, Udo Ernst2

1University of Bremen, Brain Research Institute, Bremen, Germany; 2University of Bremen, Computational Neurophysics Lab, Institute for Theoretical Physics, Bremen, Germany

Correspondence: Detlef Wegener (wegener@brain.uni-bremen.de)

BMC Neuroscience 2020, 21(Suppl 1):P53

Successful visually-guided behavior in natural environments critically depends on rapid detection of changes in visual input. A wildcat chasing a gazelle needs to quickly adapt its motions to sudden direction changes of the prey, and a human driving a fast car on a highway must instantaneously react to the onset of the red brake light of the car in front. Visually responsive neurons represent such rapid feature changes in comparably rapid, transient changes of their firing rate. In the motion domain, for example, neurons in monkey area MT were shown to represent the sign and magnitude of a rapid speed change in the sign and amplitude of the evoked firing rate modulation following that change [1]. For positive speed changes, it was also shown that the transient’s latency closely correlates with reaction time, and is modulated by both spatial and non-spatial visual attention [2,3].

We here introduce a computational model based on a simple, canonical circuit in a cortical hypercolumn. We use the model to investigate the computational mechanisms underlying transient neuronal firing rate changes and their modulation by attention under a wide range of stimulus conditions. The model is built of an excitatory and an inhibitory unit, both of which respond to an external input I(t). The excitatory unit receives additional divisive input from the inhibitory unit. The model's dynamics is described by two differential equations quantifying how the mean activity Ae of the excitatory unit and the divisive input current change with time t. By fitting the model parameters to experimental data, we show that it is capable of reproducing the time courses of transient responses under passive viewing conditions. Mathematical analysis of the circuit explains hallmark effects of transient activations and identifies the relevant parameters determining response latency, peak response, and sustained activation. Visual attention is implemented as a simple multiplicative gain on the input to both units.
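The abstract does not state the equations explicitly; the following is a schematic sketch of such a two-unit circuit with divisive inhibition, in which a slower inhibitory unit produces a transient overshoot followed by a sustained level. All parameter values are illustrative:

```python
import numpy as np

def simulate(I, dt=1e-3, tau_e=0.01, tau_i=0.05, gain=1.0):
    """Euler integration of a schematic divisive-normalization circuit:
    tau_e dAe/dt = -Ae + gain*I / (1 + Ai)   (excitatory, divided)
    tau_i dAi/dt = -Ai + gain*I              (inhibitory, slower)
    `gain` multiplies the input to both units, mimicking attention.
    Returns the excitatory activity Ae over time.
    """
    ae, ai = 0.0, 0.0
    out = np.empty(len(I))
    for t, inp in enumerate(I):
        drive = gain * inp
        ae += dt / tau_e * (-ae + drive / (1.0 + ai))
        ai += dt / tau_i * (-ai + drive)
        out[t] = ae
    return out
```

Because the inhibitory unit lags the excitatory one (tau_i > tau_e), a step input drives Ae transiently toward the undivided drive before inhibition builds up and settles it at the lower sustained level; scaling the gain modulates both the peak and how quickly the transient rises.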

A key result of the analysis of the model's dynamics is that steeper rise or decay times of the transient provide a consistent mechanism of attentional modulation, independent of both the overall activation of the neuron prior to the speed change and the sign of the change. This prediction is tested by new experiments requiring attention to both positive and negative speed changes. The results are in full accordance with the model's prediction, providing evidence that even decreases in firing rate in response to a reduction in the speed of an attended stimulus occur with shorter latency. Thus, the model provides a unique framework for a mechanistic understanding of MT response dynamics under very different sensory and behavioral conditions.

Acknowledgments: Supported by BMBF grant 01GQ1106 and DFG grant WE 5469/3-1.

References

1. Traschütz A, Kreiter AK, Wegener D. Transient activity in monkey area MT represents speed changes and is correlated with human behavioral performance. Journal of Neurophysiology. 2015; 113: 890-903.

2. Galashan FO, Saßen HC, Kreiter AK, Wegener D. Monkey area MT latencies to speed changes depend on attention and correlate with behavioral reaction times. Neuron. 2013; 78: 740-750.

3. Schledde B, Galashan FO, Przybyla M, Kreiter AK, Wegener D. Task-specific, dimension-based attentional shaping of motion processing in monkey area MT. Journal of Neurophysiology. 2017; 118: 1542-1555.

P54 Non-invasive chronometric modeling of oculomotor processing

Devin Kehoe, Mazyar Fallah

York University, Department of Psychology, Toronto, Canada

Correspondence: Devin Kehoe (dhkehoe@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P54

There is a striking correlation between overt oculomotor behavior and neurophysiological processing in critical oculomotor substrates like the superior colliculus (SC). During oculomotor target selection, unresolved activation [1] or suppression [2] at a competing distractor locus in the epoch ~30 ms prior to saccade initiation elicits saccades curved towards or away from the distractor, respectively. We therefore developed a non-invasive technique to measure the time course of excitatory and inhibitory activity encoding a distractor, in which human saccade curvature is modeled as a function of saccade-distractor onset asynchrony (SDOA): the time between the transient onset of a task-irrelevant distractor and the initiation of a saccade to a target [3]. The distractor processing time course observed using this technique was closely aligned to the time course of visuomotor neural activity in SC during target selection [4], and we observed time course differences between luminance- and color-modulated distractors that were also in alignment with SC visuomotor cell activity [5].

We expanded the SDOA technique to examine oculomotor processing of complex objects during a perceptual discrimination saccade task with varied visual similarity between distractors and targets. We found that the latency of the initial excitatory response was ~60 ms longer for these complicated, task-relevant objects than for task-irrelevant distractors with simple visual features [3]. We also observed differences in the excitatory processing time course for the complex objects, which has critical implications for theories of oculomotor target selection. We developed additional analytic techniques that estimate the latency of excitatory processing independently of saccade curvature modeling, and these provided consistent temporal estimates. The first analysis examined target selection accuracy as a function of distractor processing time and showed that, prior to the estimated time at which distractor information is projected into the oculomotor substrates, target selection was guided exclusively by the target representation. The second analysis examined the frequency of saccades as a function of SDOA and demonstrated that, immediately after the estimated onset of excitatory activity encoding the distractor, there is a transient drop in the likelihood of making a saccade, mirroring the effects of flash suppression. This work confirms the validity of our non-invasive chronometric technique and our original interpretation of it, while also illustrating that SDOA saccade curvature modeling is applicable to more complicated oculomotor target selection contexts.
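
The logic of SDOA modeling can be sketched as signed curvature arising from an early excitatory and a later, stronger inhibitory component. The latencies, time constant, and inhibitory weight below are illustrative assumptions, not fitted values:

```python
import math

def component(t, latency, tau):
    """Delayed, saturating activation: zero before `latency`, exponential rise after."""
    return 0.0 if t < latency else 1.0 - math.exp(-(t - latency) / tau)

def curvature(sdoa, exc_latency=60.0, inh_latency=110.0, tau=30.0, w_inh=1.6):
    """Signed saccade curvature as a function of saccade-distractor onset
    asynchrony (ms): positive = toward the distractor, negative = away."""
    return component(sdoa, exc_latency, tau) - w_inh * component(sdoa, inh_latency, tau)
```

Short SDOAs leave the excitatory response unresolved (curvature toward the distractor); long SDOAs let the stronger, later inhibition dominate (curvature away), mirroring the time course described above.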

References

  1. McPeek RM, Han JH, Keller EL. Competition between saccade goals in the superior colliculus produces saccade curvature. Journal of Neurophysiology. 2003; 89: 2577–2590.

  2. White BJ, Theeuwes J, Munoz DP. Interaction between visual- and goal-related neuronal signals on the trajectories of saccadic eye movements. Journal of Cognitive Neuroscience. 2012; 24: 707–717.

  3. Kehoe DH, Fallah M. Rapid accumulation of inhibition accounts for saccades curved away from distractors. Journal of Neurophysiology. 2017; 118: 832–844.

  4. McPeek RM, Keller EL. Saccade target selection in the superior colliculus during a visual search task. Journal of Neurophysiology. 2002; 88: 2019–2034.

  5. White BJ, Boehnke SE, Marino RA, Itti L, Munoz DP. Color-related signals in the primate superior colliculus. Journal of Neuroscience. 2009; 29: 12159–12166.

P55 Identifying changes in whole-brain functional connectivity in complex longitudinal clinical trials

Sidhant Chopra1, Kristina Sabaroedin2, Shona Francey3, Brian O’Donoghue3, Vanessa Cropley4, Barnaby Nelson3, Jessica Graham3, Lara Baldwin3, Steven Tahtalian4, Hok Pan Yuen3, Kelly Allott3, Mario Alvarez3, Susy Harrigan3, Christos Pantelis4, Stephen Wood3, Patrick McGorry3, Alex Fornito5

1Monash University, Turner Institute for Brain and Mental Health, Melbourne, Australia; 2Monash University, Melbourne, Australia; 3Orygen Youth Health, Melbourne, Australia; 4Melbourne Neuropsychiatry Centre, Melbourne, Australia; 5Monash University, The Turner Institute for Brain and Mental Health, School of Psychological Sciences and Monash Biomed, Melbourne, Australia

Correspondence: Sidhant Chopra (sid.chopra@monash.edu)

BMC Neuroscience 2020, 21(Suppl 1):P55

Resting-state functional magnetic resonance imaging (rs-fMRI) is increasingly being used as a secondary measure in complex clinical trials [1]. Including rs-fMRI allows researchers to investigate the impact that interventions, such as medication, can have on regional and network-level brain hemodynamics. Such trials are expensive, difficult to conduct, and often combine small samples from rare clinical populations with high attrition rates. Standard neuroimaging analysis software is not usually suited to these sub-optimal design parameters, so accessible statistical tools that are robust to these conditions are much needed.

We propose an analysis workflow which combines 1) an ordinary least squares (OLS) marginal model with a robust covariance estimator to account for within-subject correlation, 2) nonparametric p-value inference using a novel bootstrapping method [2], and 3) edge- and component-level family-wise error (FWE) control using the Network Based Statistic [3]. This workflow has several advantages: it is robust to unbalanced longitudinal samples, applies a small-sample correction using heteroskedasticity-consistent standard errors, and simplifies nonparametric inference. Additionally, it is computationally less demanding than traditional linear mixed models and does not bias the analysis by pre-selecting regions of interest.
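
Step 1 of the workflow can be illustrated with a minimal OLS fit plus an HC0 "sandwich" covariance. This toy single-regressor version omits the within-subject clustering, bootstrap inference, and network-level FWE control of the full method:

```python
def ols_hc0(x, y):
    """OLS intercept/slope with a heteroskedasticity-consistent (HC0)
    sandwich covariance: (X'X)^-1 (sum_i e_i^2 x_i x_i') (X'X)^-1,
    for the design matrix with rows [1, x_i]."""
    n = len(x)
    sx = sum(x)
    sxx = sum(xi * xi for xi in x)
    sy = sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    det = n * sxx - sx * sx
    inv = [[sxx / det, -sx / det],          # "bread": (X'X)^-1
           [-sx / det, n / det]]
    b0 = inv[0][0] * sy + inv[0][1] * sxy   # intercept
    b1 = inv[1][0] * sy + inv[1][1] * sxy   # slope
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    meat = [[sum(e * e for e in resid),
             sum(e * e * xi for e, xi in zip(resid, x))],
            [sum(e * e * xi for e, xi in zip(resid, x)),
             sum(e * e * xi * xi for e, xi in zip(resid, x))]]
    tmp = [[sum(inv[i][k] * meat[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
    cov = [[sum(tmp[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
    return (b0, b1), cov
```

Unlike the model-based OLS variance, the sandwich estimate remains valid when residual variance differs across observations, which is the property the workflow exploits for small, unbalanced samples.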

We apply this novel workflow to a world-first triple-blind longitudinal placebo-controlled trial in which 62 antipsychotic-naïve people aged 15 to 24 with first-episode psychosis received either an atypical antipsychotic or a placebo pill over a 6-month treatment period. Both patient groups received intensive psychosocial therapy. A third, healthy control group with no psychiatric diagnosis (n = 27) was also recruited. rs-fMRI scans were acquired at baseline, 3 months, and 12 months. We show that our analysis method is sufficiently sensitive to detect FWE-corrected significant components in this complex design of three groups (healthy control, placebo, medication) by three time points (baseline, 12 weeks, 52 weeks).

Here, we introduce an analysis workflow capable of detecting changes in resting-state functional networks in complex clinical trials with multiple timepoints and unbalanced groups. The workflow is freely available as an R function, netSandwich.

References

  1. O’Donoghue B, et al. Staged treatment and acceptability guidelines in early psychosis study (STAGES): A randomized placebo controlled trial of intensive psychosocial treatment plus or minus antipsychotic medication for first‐episode psychosis with low‐risk of self‐harm or aggression. Study protocol and baseline characteristics of participants. Early Intervention in Psychiatry. 2019; 13(4): 953-960.

  2. Guillaume B, et al. Improving mass-univariate analysis of neuroimaging data by modelling important unknown covariates: Application to Epigenome-Wide Association Studies. NeuroImage. 2018; 173: 57-71.

  3. Zalesky A, Fornito A, Bullmore ET. Network-based statistic: identifying differences in brain networks. NeuroImage. 2010; 53(4): 1197-1207.

P56 Impact of simulated asymmetric interregional cortical connectivity on the local field potential

David Boothe, Alfred Yu, Kelvin Oie, Piotr Franaszczuk

U.S. Army Research Laboratory, Human Research Engineering Directorate, Aberdeen Proving Ground, Maryland, United States of America

Correspondence: David Boothe (david.l.boothe7.civ@mail.mil)

BMC Neuroscience 2020, 21(Suppl 1):P56

Spontaneous neuronal activity, as observed using electroencephalography, is characterized by a non-stationary 1/f power spectrum interspersed with periods of rhythmic activity [1]. Underlying cortical neuronal activity is, by contrast, hypothesized to be sparse and arrhythmic [2]. Properties of cortical neuronal connectivity such as sparsity, small-world organization, and conduction delays have all been proposed to play a critical role in the generation of spontaneous brain activity. However, the relationship between the structure reflected in measures of global brain activity, the underlying neuronal activity, and neuronal connectivity is, at present, poorly characterized. To explore the role of cortical connectivity in the generation of spontaneous brain activity, we present a simulation of cerebral cortex based on the Traub model [3] implemented in the GENESIS neuronal simulation environment.

We made extensive changes to the original Traub model in order to more faithfully reproduce the spontaneous cortical activity described above. We re-tuned the original Traub parameters to eliminate intrinsic neuronal activity and removed the gap junctions. Tuning out intrinsic neuronal activity made changes to the underlying connectivity the central factor modifying overall model activity. The model we present consists of 16 simulated cortical regions, each containing 976 neurons (15,616 neurons total). Previously, we connected simulated regions in a nearest-neighbor fashion via short-range association fibers originating from pyramidal cells in cortical layer 2/3 (P23s). We found that introducing symmetric bidirectional inter-regional connectivity was sufficient to induce both a 1/f power spectrum and oscillatory behavior in the 2 to 40 Hz range in the local field potential of the underlying cortical regions. However, we also found that sub-region activity was fairly uniform, even if these sub-region oscillations were not strongly correlated with one another. We hypothesize that introducing asymmetric inter-regional connectivity in this model may produce underlying simulated neuronal activity that is more variable in its output and more similar to that observed in the biological system.

Connectivity between cortical regions in the biological brain is often asymmetric, with outputs of layer 2/3 pyramidal cells terminating in different layers, and in different proportions, in receiving regions of cortex [4].

Here we explore how these asymmetric connectivity schemes alter microscopic (spiking) and macroscopic (local field potential) features of our cortical simulations. We re-organized our 16 simulated cortical regions in a hierarchical fashion using the feedforward and feedback connectivity patterns observed between regions of the visual system [4], and compare the behavior of this network to our previous simulations using nearest-neighbor and small-world-like inter-regional connectivity. We hypothesize that networks with asymmetric connectivity between regions will produce richer and more heterogeneous model outputs.
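
One common way to quantify the 1/f character of a simulated LFP in the 2 to 40 Hz band is the slope of the power spectrum on log-log axes; the abstract does not specify the actual spectral pipeline, so the following is only a generic sketch:

```python
import math

def powerlaw_slope(freqs, power):
    """Least-squares slope of log10(power) against log10(frequency);
    a slope near -1 indicates 1/f scaling."""
    lx = [math.log10(f) for f in freqs]
    ly = [math.log10(p) for p in power]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

freqs = list(range(2, 41))          # the 2-40 Hz band discussed above
power = [1.0 / f for f in freqs]    # an ideal 1/f spectrum for illustration
```

Applied to a periodogram of the simulated local field potential, a slope near -1 would indicate the 1/f regime, while narrow-band departures from the fitted line mark the oscillatory periods.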

References

  1. Le Van Quyen M. Disentangling the dynamic core: a research program for a neurodynamics at the large-scale. Biological Research. 2003; 36(1): 67-88.

  2. Buzsáki G. Rhythms of the Brain: Oxford University Press; 2006.

  3. Traub RD, et al. Single-column thalamocortical network model exhibiting gamma oscillations, sleep spindles, and epileptogenic bursts. Journal of Neurophysiology. 2005; 93(4): 2194-232.

  4. Salin PA, Bullier J. Corticocortical connections in the visual system: structure and function. Physiological Reviews. 1995; 75(1): 107-54.

P57 Motor cortex encodes a value function consistent with reinforcement learning

Venkata Tarigoppula1, John Choi2, John Hessburg2, David McNiel2, Brandi Marsh2, Joseph Francis2

1University of Melbourne, Biomedical Engineering, Melbourne, Australia; 2State University of New York Downstate Medical Center, Physiology and Pharmacology, New York, United States of America

Correspondence: Venkata Tarigoppula (adi.tarigoppula@unimelb.edu.au)

BMC Neuroscience 2020, 21(Suppl 1):P57

Reinforcement learning (RL) theory provides a simple model that can help explain many animal behaviors. RL models have been very successful in describing neural activity in multiple brain regions and at several spatiotemporal scales, from single units up to hemodynamics, during learning in animals including humans. A key component of RL is the value function, which captures the expected, temporally discounted reward from a given state. A reward prediction error occurs when there is a discrepancy between the value function and the actual reward, and this error is used to drive learning. The value function can also be modified by the animal’s knowledge and certainty of its environment. Here we show that bilateral primary motor cortical (M1) neural activity in non-human primates (Rhesus and Bonnet macaques of either sex) encodes a value function in line with temporal difference RL. M1 responds to the delivery of unpredictable reward (unconditional stimulus, US), and shifts its value-related response earlier in a trial, becoming predictive of expected reward, when reward is made predictable by an explicit cue (conditional stimulus, CS). This is observed in tasks performed manually or observed passively, and in tasks without an explicit CS but with a predictable temporal reward environment. M1 also encodes expected reward value in a multiple-reward-level CS-US task. Here we extend the microstimulus temporal difference RL model (MSTD), reported to accurately capture RL-related dopaminergic activity, to account for both phasic and tonic M1 reward-related neural activity in a multitude of tasks, during manual as well as observational trials. This information has implications for autonomously updating brain-machine interfaces.
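
The core claim, that the value-related response migrates from the reward (US) to the cue (CS), follows directly from temporal-difference learning. A tabular TD(0) sketch with illustrative parameters (a simplification of the microstimulus representation used in MSTD):

```python
def td_trace(n_trials=2000, K=10, alpha=0.1, gamma=0.95):
    """TD(0) over trial time from CS onset; reward delivered K steps later.
    Returns the learned value function and the prediction errors at cue
    and reward after learning."""
    V = [0.0] * (K + 2)              # V[K+1]: post-reward terminal state
    for _ in range(n_trials):
        for t in range(K + 1):
            r = 1.0 if t == K else 0.0
            delta = r + gamma * V[t + 1] - V[t]   # reward prediction error
            V[t] += alpha * delta
    rpe_at_cue = V[0]                # response evoked by the unexpected CS
    rpe_at_reward = 1.0 + gamma * V[K + 1] - V[K]
    return V, rpe_at_cue, rpe_at_reward
```

After learning, the reward itself is fully predicted (near-zero error at the US) while the CS evokes the temporally discounted value gamma^K, the "earlier shift" of the value-related response described above.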

P58 Systematic testing and validation of models of hippocampal neurons against electrophysiological data

Sára Sáray1, Andrew Davison2, Szabolcs Kali3, Christian Rössert4, Shailesh Appukuttan5, Eilif Muller4, Tamás Freund3

1Pázmány Péter Catholic University, Faculty of Information Technology and Bioinics, Budapest, Hungary; 2Centre National de la Recherche Scientifique, Unité de Neuroscience, Information et Complexité, Gif sur Yvette, France; 3Institute of Experimental Medicine, Hungarian Academy of Sciences, Budapest, Hungary; 4École Polytechnique Fédérale de Lausanne, Blue Brain Project, Geneva, Switzerland; 5Centre National de la Recherche Scientifique/ Université Paris-Saclay, Paris-Saclay Institute of Neuroscience, Gif-sur-Yvette, France

Correspondence: Sára Sáray (saraysari@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P58

Anatomically and biophysically detailed neuronal models built in a data-driven manner are useful tools for understanding and predicting the behavior and function of the different cell types of the brain. Thanks to a growing number of computational and software tools, and an increasing body of experimental electrophysiological data that enable more accurate neuronal modeling, the number of models of many cell types available in the literature is constantly increasing. These models are usually developed using different methods and for different purposes, most often to reproduce the results of a few selected experiments, and it is often unknown how they would behave in other situations or whether they are able to generalize outside their original scope. This might be the reason why it is uncommon in the modelling community to re-use and further develop existing models, which prevents the construction of consensus “community models” that could capture an increasing proportion of the electrophysiological properties of a given cell type. In addition, even when models are re-used, they may lose their ability to capture their originally adjusted behavior as their parameters are retuned to fit another subset of experimental data.

The collaborative approach to model development requires extensive validation test suites that enable modelers to evaluate their models against experimental observations according to standardized criteria, and to explore changes in model behavior at the different stages of development. Applying automated tests also facilitates optimal model re-use and co-operative model development, by making it possible to learn more about models published by other groups (beyond the results included in the papers) with relatively little effort.

Initially, we addressed this issue by developing an open-source Python test suite called HippoUnit (https://github.com/KaliLab/hippounit) for the automated, systematic validation and quantitative comparison of the behavior of models of hippocampal CA1 pyramidal cells (one of the most studied cell types) against electrophysiological data. We applied HippoUnit to test and compare the behavior of several hippocampal CA1 pyramidal cell models available on ModelDB (results available at: https://github.com/KaliLab/HippoUnit_demo). We also employed the test suite to aid the development of models within the Human Brain Project (HBP) and integrated the tests into the validation framework developed in the HBP.
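
The standardized criteria used by such test suites typically reduce to feature-wise comparisons of model measurements against experimental statistics. A generic sketch of that scoring idea (illustrative only, not the actual HippoUnit API):

```python
def feature_zscore(model_value, exp_mean, exp_sd):
    """How many experimental standard deviations the model's feature value
    lies from the experimental mean."""
    return (model_value - exp_mean) / exp_sd

def summary_score(zscores):
    """Aggregate per-feature Z-scores into a single comparable model score."""
    return sum(abs(z) for z in zscores) / len(zscores)
```

For example, a model feature of 12.0 against an experimental 10.0 ± 2.0 scores Z = 1; averaging absolute Z-scores over all tested features yields a number that can be compared across models and across stages of model development.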

Currently, we are extending this test suite with new tests for the validation of other important hippocampal cell types. The new validation tests cover somatic behavior and dendritic signal propagation in basket cells and CA3 pyramidal cells, and action potential propagation in the axons of basket cells.

By presenting these results we hope to encourage the modeling community to use more systematic testing during model development, in order to create neural models that generalize better, and make the process of model building more reproducible and transparent.

Acknowledgments: Supported by the ÚNKP-19-3-III New National Excellence Program of the Ministry for Innovation and Technology; European Social Fund (EFOP-3.6.3-VEKOP-16-2017-00002); the European Union’s Horizon 2020 Framework Programme for Research and Innovation (Specific Grant Agreements No. 720270, 785907 - Human Brain Project SGA1, SGA2).

P59 Reverse engineering neural networks to identify their cost functions and implicit generative models

Takuya Isomura1, Karl Friston2

1RIKEN Center for Brain Science, Saitama, Japan; 2University College London, London, United Kingdom

Correspondence: Takuya Isomura (takuya.isomura@riken.jp)

BMC Neuroscience 2020, 21(Suppl 1):P59

It is widely recognised that maximising a variational bound on model evidence – or equivalently, minimising variational free energy – provides a unified, normative formulation of inference and learning [1]. According to the complete class theorem [2], any dynamics that minimises a cost function can be viewed as performing Bayesian inference, implying that any neural network whose activity and plasticity follow the same cost function is implicitly performing Bayesian inference. However, identifying the implicit Bayesian model that corresponds to a given cost function is a more delicate problem. Here, we identify a class of biologically plausible cost functions for canonical neural networks of rate-coding neurons, where the same cost function is minimised by both neural activity and plasticity [3]. We then demonstrate that such cost functions can be cast as variational free energy under an implicit generative model in the well-known form of partially observed Markov decision processes. This equivalence means that the activity and plasticity in a canonical neural network can be understood as approximate Bayesian inference and learning, respectively. Mathematical analysis shows that the firing thresholds – which characterise the neural network cost function – correspond to prior beliefs about hidden states in the generative model. This means that the Bayes-optimal encoding of hidden states is attained when the network’s implicit priors match the process generating its sensory inputs. The theoretical formulation was validated using in vitro neural networks comprising rat cortical cells cultured on a microelectrode array dish [4,5]. We observed that in vitro neural networks – which receive input stimuli generated from hidden sources – perform causal inference or source separation through activity-dependent plasticity. The learning process was consistent with Bayesian belief updating and the minimisation of variational free energy.
Furthermore, constraints that characterise the firing thresholds were estimated from the empirical data to quantify the in vitro network’s prior beliefs about hidden states. These results highlight the potential utility of reverse engineering generative models to characterise the neuronal mechanisms underlying Bayesian inference and learning.
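
The quantity being minimised can be made concrete for a single observation and a binary hidden state. A minimal sketch of variational free energy, with distributions chosen purely for illustration:

```python
import math

def free_energy(q, prior, likelihood, obs):
    """F = sum_s q(s) [ln q(s) - ln p(obs, s)].
    q: variational posterior over hidden states; prior: p(s);
    likelihood[s][o]: p(o | s). F is minimised (and equals -ln p(obs),
    the negative log evidence) when q is the exact Bayesian posterior."""
    return sum(q[s] * (math.log(q[s]) - math.log(prior[s] * likelihood[s][obs]))
               for s in range(len(q)) if q[s] > 0.0)

prior = [0.5, 0.5]                    # prior beliefs (cf. firing thresholds)
lik = [[0.9, 0.1], [0.2, 0.8]]        # p(o | s)
obs = 0
joint = [prior[s] * lik[s][obs] for s in range(2)]
posterior = [j / sum(joint) for j in joint]   # exact posterior for comparison
```

Any belief other than the exact posterior yields a strictly larger F, which is the sense in which dynamics that descend this cost function implement approximate Bayesian inference.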

References

  1. Friston K. The free-energy principle: a unified brain theory? Nature Reviews Neuroscience. 2010; 11(2): 127-38.

  2. Wald A. An essentially complete class of admissible decision functions. The Annals of Mathematical Statistics. 1947; 549-55.

  3. Isomura T, Friston K. Reverse engineering neural networks to characterise their cost functions. bioRxiv. 2019; 654467.

  4. Isomura T, Kotani K, Jimbo Y. Cultured cortical neurons can perform blind source separation according to the free-energy principle. PLoS Computational Biology. 2015; 11(12): e1004643.

  5. Isomura T, Friston K. In vitro neural networks minimise variational free energy. Scientific Reports. 2018; 8(1): 1-4.

P60 Cholinergic modulation can produce rapid task-related plasticity in the auditory cortex

Jordan Chambers1, Shihab Shamma2, Anthony Burkitt1, David Grayden1, Diego Elgueda3, Jonathan Fritz4

1University of Melbourne, Department of Biomedical Engineering, Melbourne, Australia; 2University of Maryland, Institute for Systems Research, College Park, Maryland, United States of America; 3Universidad de Chile, Facultad de Ciencias Veterinarias y Pecuarias, Santiago, Chile; 4New York University, Center for Neural Science, New York, United States of America

Correspondence: Jordan Chambers (jordanc@unimelb.edu.au)

BMC Neuroscience 2020, 21(Suppl 1):P60

Neurons in the primary auditory cortex (A1) display rapid task-related plasticity, which is believed to enhance the ability to selectively attend to one stream of sound in complex acoustic scenes. Previous studies have suggested that cholinergic projections from the Nucleus Basalis to A1 modulate auditory cortical responses and may be a key component of rapid task-related plasticity. However, the underlying molecular, cellular, and network mechanisms of cholinergic modulation of cortical processing remain unclear.

A previously published model of A1 receptive fields [1] that can reproduce task-related plasticity was used to investigate mechanisms of cholinergic modulation in A1. The previous model comprised a cochlea model and integrate-and-fire model neurons to represent networks in A1. Action potentials from individual model neurons were used to calculate the receptive field using reverse correlation, which allowed direct comparison to experimental data. To allow an investigation into different mechanisms of cholinergic modulation at A1, this previous model was extended by: (1) adding integrate-and-fire neurons to represent neurons projecting from Nucleus Basalis to A1; (2) adding inhibitory interneurons in A1; (3) including internal calcium dynamics in the integrate-and-fire models; and (4) including calcium-dependent potassium conductance in the integrate-and-fire models. Since cholinergic modulation has several potential sites of action in A1, the current model was used to investigate acetylcholine acting through both muscarinic and nicotinic acetylcholine receptors (mAChR and nAChR, respectively) located presynaptically or postsynaptically.
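
Extensions (3) and (4) can be sketched as an integrate-and-fire unit with a calcium pool driving an adaptation current. Units and parameter values below are illustrative, not those of the fitted A1 model:

```python
def lif_kca(inputs, dt=0.1, tau_m=20.0, v_th=1.0, v_reset=0.0,
            tau_ca=80.0, ca_inc=0.2, g_kca=0.5):
    """Leaky integrate-and-fire with a calcium-dependent potassium current.
    Each spike increments the calcium pool `ca`, which drives an outward
    (adaptation) current g_kca * ca; returns spike-time indices."""
    v, ca, spikes = 0.0, 0.0, []
    for step, i_ext in enumerate(inputs):
        v += dt * (-v - g_kca * ca + i_ext) / tau_m
        ca -= dt * ca / tau_ca        # calcium decays between spikes
        if v >= v_th:
            v = v_reset
            ca += ca_inc              # spike-triggered calcium influx
            spikes.append(step)
    return spikes
```

Under constant drive, the accumulating calcium lengthens successive inter-spike intervals (spike-frequency adaptation), the kind of intrinsic dynamics these extensions add to the network model.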

Four possible mechanisms of cholinergic modulation on A1 receptive fields were investigated. Previous research indicates cholinergic modulation should be able to suppress an inhibitory region and enhance an excitatory region in the receptive fields [2]. Our model indicates it is unlikely that any one of these four mechanisms could produce these opposite changes to both excitatory and inhibitory regions. However, multiple mechanisms occurring simultaneously could produce the expected changes to the receptive fields in this model. We demonstrate that combining either presynaptic nAChR with presynaptic mAChR or presynaptic nAChR with postsynaptic nAChR is capable of producing changes to A1 receptive fields observed during rapid task-related plasticity.

This model tested four mechanisms by which cholinergic modulation may induce rapid task-related plasticity in A1. Cholinergic modulation could reproduce experimentally observed changes to A1 receptive fields when implemented using a combination of mechanisms. Two different combinations of cholinergic modulation were found to produce the expected changes in A1 receptive fields. Since the model predicts that these two combinations would have differential effects on neuronal firing rates, it will be possible to run experimental tests to distinguish between the two theoretical possibilities.

References

  1. Chambers JD, et al. Computational Neural Modeling of Auditory Cortical Receptive Fields. Frontiers in Computational Neuroscience. 2019; 13: 28.

  2. Fritz J, Shamma S, Elhilali M, Klein D. Rapid task-related plasticity of spectrotemporal receptive fields in primary auditory cortex. Nature Neuroscience. 2003; 6(11): 1216-23.

P61 Self-supervision mechanism of multiple dendritic compartments for temporal feature learning

Milena M Carvalho1, Tomoki Fukai2

1University of Tokyo, Department of Complexity Science and Engineering, Tokyo, Japan; 2Okinawa Institute of Science and Technology Graduate University, Neural Coding and Brain Computing Unit, Okinawa, Japan

Correspondence: Milena M Carvalho (glowingsea@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P61

In recent years, there has been a surge of interest in how dendrites and their complex geometry allow for the integration of spatiotemporal input patterns and their transformation into a neuronal response [1,2]. Although sizeable amounts of synaptic input arrive at distinct branches, the dendritic tree must convert this composite signal into meaningful information, which is then transferred to the soma and may evoke spiking activity [3]. While dendritic integration modeling has developed considerably over the last two decades, single-compartment neurons are still a hallmark of computational neuroscience and machine learning [2]; this level of abstraction, however, ignores components of dendritic integration that may be essential to properly represent neuronal dynamics, limiting the performance of the studied systems.

Some of these shortcomings were recently overcome by a two-compartment model that introduced a self-supervision rule within a single neuron to minimize the information loss between dendritic synaptic input and somatic output spiking activity [4]. Networks composed of this neuron model could perform a variety of unsupervised temporal feature learning tasks, such as chunking and blind source separation, that are usually performed by specialized networks with different learning rules. We wish to generalize this learning principle and develop a new framework in which dendritic trees have two or more compartments with hierarchical, linear-nonlinear integration [3]. Here we investigate how self-supervision can be defined in this system and examine its accuracy on the previously introduced temporal feature learning tasks. We expect that, by distributing synaptic input into different compartments of the same neuron, our model can use delayed integration to differentiate similar temporal patterns that were previously indistinguishable.
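
The hierarchical linear-nonlinear scheme can be sketched in a few lines; the sigmoid nonlinearity, weights, and bias are illustrative assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hierarchical_ln(branch_inputs, w_branch=1.0, w_soma=1.0, bias=-1.0):
    """Two-stage linear-nonlinear integration: each dendritic branch sums its
    synaptic input through its own nonlinearity, and the soma combines the
    branch outputs through a second nonlinearity."""
    branch_out = [sigmoid(w_branch * sum(inp) + bias) for inp in branch_inputs]
    return sigmoid(w_soma * sum(branch_out) + bias)

# the same total input yields different somatic output depending on whether
# it is clustered on one branch or dispersed across branches
clustered = hierarchical_ln([[2.0, 2.0], [0.0, 0.0]])
dispersed = hierarchical_ln([[2.0, 0.0], [2.0, 0.0]])
```

With a saturating branch nonlinearity, the dispersed arrangement drives the soma harder than the clustered one; it is exactly this sensitivity to where input lands that a single-compartment abstraction discards.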

Acknowledgments: This work was partly supported by KAKENHI (nos. 18H05213 and 19H04994). We would like to thank the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan and Okinawa Institute of Science and Technology Graduate University (OIST) for supporting M. M. C.

References

  1. Stuart GJ, Spruston N. Dendritic integration: 60 years of progress. Nature Neuroscience. 2015; 18(12): 1713-21.

  2. Häusser M, Mel B. Dendrites: bug or feature? Current Opinion in Neurobiology. 2003; 13(3): 372-383.

  3. Ujfalussy BB, Makara JK, Lengyel M, Branco T. Global and Multiplexed Dendritic Computations under In Vivo-like Conditions. Neuron. 2018; 100(3): 579-592.e5.

  4. Asabuki T, Fukai T. Somatodendritic consistency check for temporal feature segmentation. Nature Communications. 2020; 11: 1554.

P62 Large scale discrimination between neural models and experimental data

Russell Jarvis1, Sharon Crook2, Richard Gerkin3

1Arizona State University, Neuroscience, Tempe, Arizona, United States of America; 2Arizona State University, School of Mathematical and Statistical Sciences, Tempe, Arizona, United States of America; 3Arizona State University, School of Life Sciences, Tempe, Arizona, United States of America

Correspondence: Russell Jarvis (rjjarvis@asu.edu)

BMC Neuroscience 2020, 21(Suppl 1):P62

Scientific insight is well-served by the discovery and optimization of abstract models that can reproduce experimental findings. NeuroML (NeuroML.org), a model description language for neuroscience, facilitates reproducibility and exchange of such models by providing an implementation-agnostic model description in a modular format. NeuronUnit (neuronunit.scidash.org) evaluates model accuracy by subjecting models to experimental data-driven validation tests, a formalization of the scientific method.

A neuron model that perfectly imitated real neuronal electrical behavior in response to any stimulus would not be distinguishable from experiments by any conventional physiological measurement. To assess whether existing neuron models approach this standard, we took 972 existing neuron models from NeuroML-DB.org and subjected them to a standard series of electrophysiological stimuli (somatic current injection waveforms). We then extracted 448 analogous stimulus-evoked recordings of real cortical neurons from the Allen Cell Types Database. We applied multiple feature extraction algorithms to the physiological responses of both model simulations and experimental recordings in order to characterize physiological behavior in great detail, spanning hundreds of features.

After applying dimensionality reduction to this very high-dimensional feature space, we show that the real (biological) and simulated (model) recordings are easily and fully discriminated, by eye or by any reasonable classifier. Consequently, not a single model neuron produced physiological responses that could be confused with those of a biological neuron. Was this a defect of model design (e.g., key mechanisms unaccounted for) or of model parameterization? We found that models revised via optimization overlapped with the distribution of biological neurons and were mostly classified as such. The remaining post-optimization disagreement between models and biological neurons may reflect limitations of model design, and can be investigated by probing the key features classifiers use to distinguish the two populations.
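
The discrimination step can be illustrated with the simplest possible classifier over extracted feature vectors, a nearest-centroid rule (the actual analysis used richer feature sets and classifiers):

```python
def centroid(vectors):
    """Component-wise mean of a list of feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    """Label of the nearest class centroid by squared Euclidean distance."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# toy 2-D feature space in which the two populations form separated clusters,
# mirroring the full discrimination between model and biological responses
cents = {
    "model": centroid([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]),
    "biological": centroid([[5.0, 5.0], [6.0, 5.0], [5.0, 6.0]]),
}
```

When the populations are fully separated, as reported above, even this trivial rule classifies every recording correctly; post-optimization overlap is what degrades such a classifier.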

P63 Morphological determinants of neuronal dynamics and function

Christoph Kirch1, Leonardo Gollo2

1QIMR Berghofer, Brisbane, Australia; 2Monash University, Melbourne, Australia

Correspondence: Christoph Kirch (ckirc4@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P63

Neurons use their complex dendritic tree to integrate spatio-temporal patterns of incoming signals. The nonlinear interactions of spikes along the bifurcating branches allow the neuron to perform dendritic computations [1]. While models often aim to realistically simulate the complicated chemical properties of these signals, they typically do so at the cost of simplifying the spatial structure to one (e.g., Hodgkin-Huxley, integrate-and-fire) [2] or a few compartments [3]. These simplified structures do not accurately represent the morphology, so it is not possible to infer how the dendritic structure shapes a neuron's output.

Here, we used detailed neuron reconstructions from the online database NeuroMorpho.Org [4]. Each neuron is made up of one somatic compartment and up to 10,000 dendritic compartments. By treating these compartments as an excitable network [5, 6], we could apply a simple discrete model of dendritic spike propagation to investigate how the morphology affects the firing behavior of a neuron.

Our approach allows for a detailed analysis of the neuron's dendritic activity pattern. For example, we can generate spatial heatmaps of firing rate, revealing a significant spatial dependence of dynamics. By comparing the compartmental activity for different strengths of external stimulus, we can investigate the dynamic range – the range of input strengths over which a compartment's firing rate varies the most. We find that dendritic bifurcations boost the local dynamic range. Thus, a soma located in a densely bifurcated region tends to have a large dynamic range.
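A minimal sketch of such a discrete excitable-network model, on a toy seven-compartment tree with hypothetical parameters h (external drive probability per step) and p (propagation probability per branch), loosely following [5,6]:

```python
import random

random.seed(1)

def simulate(tree, h, p, steps=2000):
    """Discrete excitable dynamics on a compartment tree.

    tree: adjacency list. Each compartment is resting (0), active (1),
    or refractory (2). Active compartments excite resting neighbours
    with probability p; resting compartments are driven externally
    with probability h per step; refractory compartments recover.
    """
    n = len(tree)
    state = [0] * n
    activations = 0
    for _ in range(steps):
        new = [0] * n
        for i in range(n):
            if state[i] == 0:
                driven = random.random() < h
                excited = any(
                    state[j] == 1 and random.random() < p for j in tree[i]
                )
                if driven or excited:
                    new[i] = 1
                    activations += 1
            elif state[i] == 1:
                new[i] = 2          # active -> refractory
            # refractory (2) -> resting (0) by default
        state = new
    return activations / (n * steps)   # mean firing rate per compartment

# A small symmetric binary tree as a toy "dendritic arbour".
tree = {0: [1, 2], 1: [0, 3, 4], 2: [0, 5, 6],
        3: [1], 4: [1], 5: [2], 6: [2]}

weak = simulate(tree, h=0.001, p=0.8)
strong = simulate(tree, h=0.5, p=0.8)
```

Sweeping h and reading the rate-vs-h curve yields the dynamic range (e.g., 10·log10(h90/h10)); comparing mean compartment rate against the somatic rate gives the energy ratio discussed below. Parameters here are illustrative only.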

How effectively a neuron utilizes its dendritic tree to amplify stimuli can be assessed by comparing the average dendritic compartment activity against how often the soma fires. Since it takes energy to operate the ion channels responsible for dendritic spikes, we call this ratio the relative energy consumption of the neuron. If it is <1, the neuron is energy efficient; conversely, if it is >1, the neuron is inefficient. We identified two morphological features – the number of somatic branches and the centrality of the soma – that can be used to categorize the energy behavior of neurons (Fig. 1).

Fig. 1

Neurons can be classified into functional groups based on two morphological features: the number of branches connected to the soma, and its centrality. Type 1 corresponds to energy-efficient behaviour (energy < 1) across the parameter space of h and P. Type 2 exhibits mixed behaviour. Type 3 neurons are inefficient (energy > 1). Type T represents a transitional category with mixed behaviour

The classification scheme we propose provides an important testable basis for explaining structural differences across neurons. For example, depending on the required computational function, certain features of the dendritic tree would be more favorable. Our model can be applied to any of the 100,000+ reconstructions available at NeuroMorpho.Org, and can be extended to investigate the effects of changes in dendritic structure.

References

1. London M, Häusser M. Dendritic computation. Annual Review of Neuroscience. 2005; 28: 503-32.

2. Izhikevich EM. Simple model of spiking neurons. IEEE Transactions on Neural Networks. 2003; 14(6): 1569-72.

3. Izhikevich EM. Dynamical systems in neuroscience. MIT Press; 2007.

4. Kirch C, Gollo LL. Spatially resolved dendritic integration: Towards a functional classification of neurons. bioRxiv. 2020; 657403.

5. Gollo LL, Kinouchi O, Copelli M. Active dendrites enhance neuronal dynamic range. PLoS Computational Biology. 2009; 5(6): e1000402.

6. Gollo LL, Kinouchi O, Copelli M. Statistical physics approach to dendritic computation: The excitable-wave mean-field approximation. Physical Review E. 2012; 85(1): 011911.

P64 Gamma oscillations organized as localized burst patterns with anomalous propagation dynamics in primate cerebral cortex

Xian Long1, Yuxi Liu1, Paul R Martin2, Samuel G Solomon3, Pulin Gong1

1University of Sydney, School of physics, Sydney, Australia; 2University of Sydney, Save Sight Institute, Sydney, Australia; 3University College London, Department of Experimental Psychology, London, United Kingdom

Correspondence: Xian Long (longxian319@hotmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P64

Gamma oscillations (30-80 Hz) occur in transient bursts with varying frequencies and durations. These non-stationary gamma bursts have been widely observed in many brain areas, but they have rarely been quantitatively characterized, and the mechanisms that produce them are not understood. In this study we investigate the spatiotemporal properties of gamma bursts through combined empirical and modeling work. Our array recordings of local field potentials in visual cortical area MT of the marmoset monkey reveal that gamma bursts form localized patterns with complex propagation dynamics. We show that the propagation of these patterns is characterized by anomalous dynamics that are fundamentally different from the regular or Brownian motion conventionally assumed. We further show that all aspects of these anomalous dynamics can be quantitatively captured by a spatially extended, biophysically realistic circuit model. Circuit dissection of the model shows that the anomalous dynamics rely on intrinsic metastability near the critical transition between circuit states (i.e., between the synchronous and regular propagating wave states). Our results thus reveal novel spatiotemporal organization properties of gamma bursts and explain them in terms of underlying circuit mechanisms, suggesting new computational functions for gamma oscillations.
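The distinction between Brownian and anomalous propagation can be quantified with the mean squared displacement (MSD) exponent. The sketch below uses synthetic 1-D trajectories (not the MT data) and fits MSD(τ) ∝ τ^α on log-log axes, where α ≈ 1 indicates Brownian motion, α > 1 superdiffusion, and α < 1 subdiffusion:

```python
import numpy as np

rng = np.random.default_rng(2)

def msd_exponent(x, max_lag=50):
    """Fit MSD(lag) ~ lag**alpha by linear regression on log-log axes."""
    lags = np.arange(1, max_lag)
    msd = np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])
    alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return alpha

# Brownian motion: i.i.d. increments -> alpha close to 1.
brownian = np.cumsum(rng.normal(size=20000))
# Ballistic motion: constant drift -> alpha = 2 (extreme superdiffusion).
ballistic = 0.1 * np.arange(20000)

a_brown = msd_exponent(brownian)
a_ball = msd_exponent(ballistic)
```

Applied to burst-centre trajectories, an exponent significantly different from 1 is one signature of the anomalous dynamics described above.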

P65 Dynamical circuit mechanisms of attentional sampling

Guozhang Chen, Pulin Gong

The University of Sydney, School of Physics, Sydney, Australia

Correspondence: Guozhang Chen (gche4213@uni.sydney.edu.au)

BMC Neuroscience 2020, 21(Suppl 1):P65

Selective attention sifts particular objects or features out of a plethora of stimuli. Such preferential processing is often compared to a spotlight pausing to illuminate relevant targets in the visual field, in a stimulus-driven (bottom-up) and/or task-driven (top-down) way. Recent studies have revealed that bottom-up distributed attention involving multiple objects is not a sustained spotlight, but samples the visual environment in a fundamentally dynamical manner, in theta-rhythmic cycles, with each sampling cycle implemented through gamma oscillations. However, fundamental questions regarding the dynamical nature and circuit mechanism of such attentional sampling remain largely unanswered. To address these questions, we investigate a biophysically plausible cortical circuit model of spiking neurons and find that in the working regime of the model (i.e., the regime near the critical transition between the asynchronous and propagating wave states), the localized activity pattern emerging from the circuit exhibits rich spatiotemporal dynamics. We illustrate that the nonequilibrium nature of the localized pattern enables the circuit to shift dynamically to different salient external inputs, without introducing additional neural mechanisms such as the inhibition of return used in conventional winner-take-all models of attention. We elucidate that the dynamical shifting process of the activity pattern provides a mechanistic account of key neurophysiological and behavioral findings on attention, including theta oscillations, theta-gamma phase-amplitude coupling, and vigorous-faint spiking fluctuations.

Furthermore, using saliency maps of natural stimuli, we demonstrate that the nonequilibrium activity pattern dynamics explains psychophysical findings on attention maps and attention sampling paths better than conventional models, providing a profound computational advantage for efficiently sampling external environments. Our work thus establishes a novel circuit mechanism by which non-equilibrium, fluctuating pattern dynamics near the critical transition of circuit states can be exploited to implement efficient attentional sampling.

P66 Brain-computer interfaces using stereotactic electroencephalography: identification of discriminative recording sites for decoding imagined speech

Kevin Meng1, David Grayden1, Mark Cook2, Farhad Goodarzy3

1University of Melbourne, Department of Biomedical Engineering, Melbourne, Australia; 2University of Melbourne, Department of Medicine, Melbourne, Australia; 3University of Melbourne, Department of Medicine, Dental and Health Sciences, Melbourne, Australia

Correspondence: Kevin Meng (ksmeng@student.unimelb.edu.au)

BMC Neuroscience 2020, 21(Suppl 1):P66

As part of the monitoring of medication-resistant epilepsy before resective surgeries, patients are implanted with electrocorticography (ECoG) electrode arrays placed on the surface of the cortex or stereotactic electroencephalography (SEEG) depth electrodes penetrating the cortex. Both recording modalities measure local field potentials (LFPs) from their respective target locations. The patients are occasionally recruited to voluntarily participate in brain-computer interface (BCI) research. In recent years, ECoG-based BCIs have demonstrated long-term reliable decoding of various cortical processes involved in mental imagery tasks. Despite similarities in terms of clinical application and decoding strategies, SEEG-based BCIs have been the focus of only a limited number of studies. While the sparsity of their cortical coverage represents a disadvantage, SEEG depth electrodes have the potential to target bilateral combinations of deeper brain structures that are inaccessible with ECoG [1].

Here, we propose a framework for SEEG-based BCIs to identify discriminative recording sites for decoding imagined speech. Four patients with epilepsy were implanted with 7 to 11 SEEG depth electrodes, each consisting of 8 to 15 recording sites. Electrode placement and duration of monitoring were solely based on the requirements of clinical evaluation. Signals were amplified and recorded at a sampling rate of 5 kHz. The task consisted of listening to utterances and producing overt and covert (imagined) utterances of a selection of 20 monosyllabic English words made up of all combinations of five consonant patterns (/b_t/, /m_n/, /r_d/, /s_t/, /t_n/) and four vowels (/æ/, /ε/, /i:/, /u:/).

We determined the relative importance of recording sites based on classification accuracies obtained from features extracted at the corresponding electrode locations. Each trial was associated with a label (consonant pattern or vowel) and a set of features consisting of normalized log-transformed power spectral densities at different time points and selected frequency bands: delta (1-4 Hz), theta (4-8 Hz), alpha (8-12 Hz), beta (12-30 Hz), gamma 1 (30-45 Hz), gamma 2 (55-95 Hz), gamma 3 (105-145 Hz), and gamma 4 (155-195 Hz). A pair-wise classification model using logistic regression was used to predict the labels. Parameters were trained for different combinations of recording sites, as well as each condition (listening, overt, covert), patient, and pair of labels separately. The mean classification rate across all pairs of labels was calculated to quantify the discriminative power of individual and combined recording sites.
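The band-power feature extraction step might be sketched as follows. This is a periodogram-based simplification with a synthetic test signal; the study's exact spectral estimator, normalization, and windowing are not specified in the abstract:

```python
import numpy as np

fs = 5000  # sampling rate (Hz), as in the recordings

# Frequency bands from the abstract (Hz).
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma1": (30, 45), "gamma2": (55, 95),
         "gamma3": (105, 145), "gamma4": (155, 195)}

def log_band_powers(signal, fs, bands):
    """Log-transformed power in each band from the periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    feats = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        feats[name] = np.log(psd[mask].sum() + 1e-12)
    return feats

# A toy 1-second "LFP": a 10 Hz (alpha-band) oscillation plus weak noise.
rng = np.random.default_rng(3)
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 10 * t) + 0.05 * rng.normal(size=fs)
feats = log_band_powers(sig, fs, bands)
```

Stacking such features across time points and recording sites yields the vectors fed to the pairwise logistic regression classifiers.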

Our results consistently show across all patients that relevant depth electrodes for decoding imagined speech are found in both left and right superior temporal gyri. Anatomical analyses of these electrode locations revealed that recording sites in the grey matter were the most discriminative. This is in line with previous studies of speech BCIs [2]. In addition to providing a better understanding of the neural processes underlying imagined speech, our practical framework may be applied to reduce feature dimensionality and computational cost while improving accuracy in real-time SEEG-based BCI applications.

References

1. Herff C, Krusienski DJ, Kubben P. The potential of stereotactic-EEG for brain-computer interfaces: current progress and future directions. Frontiers in Neuroscience. 2020; 14: 123.

2. Cooney C, Folli R, Coyle D. Neurolinguistics research advancing development of a direct-speech brain-computer interface. iScience. 2018; 8: 103-25.

P67 A computational model to inform presurgical evaluation of epilepsy from scalp EEG

Marinho Lopes1, Leandro Junges2, Luke Tait3, John Terry2, Eugenio Abela4, Mark Richardson4, Marc Goodfellow5

1Cardiff University, Cardiff, United Kingdom; 2University of Birmingham, Centre for Systems Modelling and Quantitative Biomedicine, Birmingham, United Kingdom; 3Cardiff University, Cardiff University Brain Research Imaging Centre, Cardiff, United Kingdom; 4King’s College London, Institute of Psychiatry, Psychology and Neuroscience, London, United Kingdom; 5University of Exeter, Living Systems Institute, Exeter, United Kingdom

Correspondence: Marinho Lopes (m.lopes@exeter.ac.uk)

BMC Neuroscience 2020, 21(Suppl 1):P67

Epilepsy affects an estimated fifty million people worldwide. Approximately one third do not respond to anti-epileptic medication and are therefore potential candidates for alternative treatments such as epilepsy surgery. Surgery aims to remove the epileptogenic zone (EZ), the brain area responsible for the generation of seizures. Epilepsy surgery is thus preceded by an evaluation to determine the location of the EZ. A number of brain imaging modalities may be used in this evaluation, namely scalp electroencephalography (EEG) and magnetic resonance imaging (MRI), possibly followed by invasive intracranial EEG. The effectiveness of intracranial EEG to inform epilepsy surgery depends on where electrodes are implanted. This decision is informed by noninvasive recording modalities such as scalp EEG. The decision is frequently not trivial because scalp EEG may provide inconclusive or even contradictory predictions of the EZ location. A poor hypothesis based on noninvasive data may lead to an incorrect placement of intracranial electrodes, which in turn may make surgery ill-advised and potentially unsuccessful if performed [1].

Here we propose a framework to interrogate scalp EEG and determine epilepsy lateralization to aid in electrode implantation [2]. We used eLORETA to map source activities from seizure epochs recorded from scalp EEG and obtained functional networks using the phase-locking value (PLV). The networks were then studied using a mathematical model of epilepsy (a modified theta model to represent a network of interacting neural masses [2,3]). By removing different regions of interest from the network and simulating their impact on the network’s ability to generate seizures in silico, the framework provides predictions of epilepsy lateralization. We considered 15 individuals from the EPILEPSIAE database and studied a total of 62 seizures. Results were assessed by taking into account actual intracranial implantations and postsurgical outcome. The framework proved useful in assessing epilepsy lateralization in 12 out of 15 individuals considered. These results show promise for the use of this framework to better interrogate scalp EEG and aid clinicians in presurgical assessment of people with epilepsy.
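The phase-locking value used to build the functional networks is computed from the instantaneous phases of two signals. A self-contained sketch, using an FFT-based analytic signal as a stand-in for scipy.signal.hilbert:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (equivalent to scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2   # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1       # keep the Nyquist bin
    return np.fft.ifft(X * h)

def plv(x, y):
    """Phase-locking value: |mean of exp(i * phase difference)|, in [0, 1]."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return np.abs(np.exp(1j * dphi).mean())

rng = np.random.default_rng(4)
t = np.arange(0, 2, 1 / 500.0)
locked_a = np.sin(2 * np.pi * 6 * t)
locked_b = np.sin(2 * np.pi * 6 * t + 0.7)   # fixed phase lag -> PLV near 1
noise = rng.normal(size=t.size)              # unrelated signal -> lower PLV

plv_locked = plv(locked_a, locked_b)
plv_noise = plv(locked_a, noise)
```

Computing PLV for every pair of source regions during a seizure epoch yields the adjacency matrix on which the node-removal simulations operate.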

References

1. Jayakar P, et al. Diagnostic utility of invasive EEG for epilepsy surgery: indications, modalities, and techniques. Epilepsia. 2016; 57(11): 1735-47.

2. Lopes MA, et al. Computational modelling in source space from scalp EEG to inform presurgical evaluation of epilepsy. Clinical Neurophysiology. 2020; 131(1): 225-34.

3. Lopes MA, et al. An optimal strategy for epilepsy surgery: Disruption of the rich-club? PLoS Computational Biology. 2017; 13(8): e1005637.

P68 Large-scale calcium imaging of spontaneous activity in larval zebrafish reveals signatures of criticality

Michael McCullough1, Robert Wong1, Zac Pujic1, Biao Sun1, Geoffrey J Goodhill1,2

1University of Queensland, Queensland Brain Institute, St Lucia, Queensland, Australia; 2University of Queensland, School of Mathematics and Physics, St Lucia, Queensland, Australia

Correspondence: Geoffrey J Goodhill (g.goodhill@uq.edu.au)

BMC Neuroscience 2020, 21(Suppl 1):P68

Neural networks in the brain may self-organise such that they operate near criticality, that is, poised on the boundary between phases of order and disorder [1]. Models of neural networks tuned close to criticality are optimal in terms of dynamic range, information transmission, information storage and computational adaptability [2]. Most experimental evidence for criticality in the brain has come from studies of high resolution neural spiking data recorded from tissue cultures or anaesthetised animals using microelectrode arrays, or from studies of mesoscopic-scale neural activity using magnetic resonance imaging or electroencephalograms. These approaches are inherently limited either by under-sampling of the neural population or by coarse spatial resolution. This can be problematic for empirical studies of criticality because the characteristic dynamics of interest are theoretically scale-free.

Recently, Ponce-Alvarez et al. [3] investigated the larval zebrafish as a new model for neural criticality by utilising the unique properties of the organism that enable whole-brain imaging of neural activity in vivo and without anaesthetic. They identified hallmarks of neural criticality in larval zebrafish using 1-photon calcium imaging and voxel-based analysis of neuronal avalanches. Here we addressed two key limitations of their study by instead using 2-photon calcium imaging to observe truly spontaneous activity, and by extracting neural activity time series at single-cell resolution via state-of-the-art image segmentation [4]. Our data comprise fluorescence time series for large populations of neurons from 3-dimensional volumetric recordings of spontaneous activity in the optic tectum and cerebellum of larval zebrafish with pan-neuronal expression of GCaMP6s (n=5; approx. 10000 neurons per fish) (Fig. 1A). Neuronal avalanche statistics revealed power-law relationships and scale-invariant avalanche shape collapse which are consistent with crackling noise dynamics from a 3-dimensional random field Ising model [5] (Fig. 1B-C). Observed power laws were validated using shuffled surrogate data and log-likelihood ratio tests. This result provides the first evidence of criticality in the brain from large-scale in vivo neural activity at single cell resolution. Our findings demonstrate the potential of larval zebrafish as a model for the investigation of critical phenomena in the context of neurodevelopmental disorders that may perturb the brain away from criticality.
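Avalanche extraction, the first step of such an analysis, segments the binned population activity into runs of consecutive non-silent time bins. A minimal sketch on toy data (the power-law fitting, surrogate validation, and shape-collapse steps are omitted):

```python
def avalanches(activity):
    """Avalanche sizes: total activations within each run of consecutive
    non-silent time bins, where runs are separated by silent bins."""
    sizes, current = [], 0
    for a in activity:
        if a > 0:
            current += a
        elif current:
            sizes.append(current)
            current = 0
    if current:
        sizes.append(current)
    return sizes

# Toy population activity: number of active neurons per time bin.
act = [0, 1, 3, 0, 0, 2, 2, 1, 0, 4, 0]
sizes = avalanches(act)   # three avalanches of sizes 4, 5, and 4
```

At criticality, the distribution of such sizes pooled over a long recording follows a power law, which is what the maximum-likelihood fits and log-likelihood ratio tests above assess.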

Fig. 1

A z-projection of 2-photon data from a larval zebrafish. B The spatio-temporal profile of a neuronal avalanche (clusters of neural activations that propagate through the brain, as shown by shaded regions). C Avalanche statistics reveal characteristic power law relationships including the distribution of avalanche size (shown) consistent with a random field Ising model (dashed line)

References

1. Cocchi L, Gollo LL, Zalesky A, Breakspear M. Criticality in the brain: A synthesis of neurobiology, models and cognition. Progress in Neurobiology. 2017; 158: 132-152.

2. Shew WL, Plenz D. The functional benefits of criticality in the cortex. Neuroscientist. 2013; 19(1): 88-100.

3. Ponce-Alvarez A, et al. Whole-brain neuronal activity displays crackling noise dynamics. Neuron. 2018; 100(6): 1446-1459.

4. Giovannucci A, et al. CaImAn an open source tool for scalable calcium imaging data analysis. eLife. 2019; 8: e38173.

5. Sethna JP, Dahmen KA, Myers CR. Crackling noise. Nature. 2001; 410(6825): 242-250.

P69 Population model of oscillatory dynamics in hippocampal CA1 and CA3 regions

Ashraya S Shiva, Bruce Graham

University of Stirling, Department of Computing Science and Mathematics, Stirling, United Kingdom

Correspondence: Ashraya S Shiva (asv@cs.stir.ac.uk)

BMC Neuroscience 2020, 21(Suppl 1):P69

Theta oscillations may act as carrier waves for synchronizing activity across neuronal regions. A single theta cycle encompasses several gamma cycles, each containing a specific pattern of activity correlated with an event (e.g., animal location, forming place fields). We have extended the septo-hippocampal population firing rate model proposed by Denham and Borisyuk [1] to study the influence of inhibitory interneurons, specifically PV-containing basket cells (BCs) and bistratified cells (BSCs), on theta and theta-coupled gamma oscillations in both CA1 and CA3 hippocampal networks. Our CA1 microcircuit model combines that of Denham and Borisyuk with that of Cutsuridis et al. [2]. The CA3 model is adapted from the CA1 model on the basis of CA3-specific experimental data.

The theta-phase relationships of the neurons in these models are largely determined by the afferent and efferent connections of BCs in both CA1 and CA3. In CA1, BCs and BSCs are active on opposite phases of theta in pure-theta tuning (no gamma), consistent with data from Klausberger et al. [3,4]; extra external drive to basket cells from CA3 enables them to be active on the opposite phase to CA1 pyramidal cells (PCs). As the excitatory drive to BCs from PCs is increased, there is a bifurcation in which BC activity switches to being in phase with PCs, while BSC activity remains in phase with PCs. A further increase in drive to BCs from PCs leads to gamma oscillations in both PC and BC activity, and forces BSC activity to shift to the opposite theta phase due to strong inhibition from BCs.

Varying the strengths of external inputs also affects the strength of oscillations and the phase relationships. In both CA1 and CA3, PCs, BCs and BSCs reach steady-state activity for very low or very high external inputs. BCs in CA1 rely on CA3 and EC input for their activity in theta-only tuning, so their activity is reduced if these inputs are reduced. BSC activity in CA1 actually decreases for increased CA3 and EC input, due to increased inhibition from BCs. In CA3, recurrent connections between PCs are far more likely (and hence stronger on a population scale) than in CA1. BSCs in CA3 are driven only by CA3 PCs and have no external sources of excitation. BCs in CA3 receive dentate gyrus input in addition to PC input. Other interneurons (modeled as a single population) receive excitation from CA1 and CA3 PCs, plus EC. In CA1, the only external sources of input, apart from the septum, are CA3 and EC. The main resulting difference in CA3 activity, compared with CA1, is that BSCs are always in sync with BCs and PCs during theta. A minimum strength of septal input is required to generate theta; increasing septal input then raises the frequency of theta oscillations. In contrast, increasing dentate gyrus input transforms the oscillatory activity into stable, non-oscillatory activity, and also decreases septal activity. Strong EC and CA1 input to CA3 may silence CA3 PCs, BCs and BSCs by exciting the other inhibitory interneurons.
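The flavor of such a population firing-rate model can be conveyed by a stripped-down two-population (PC/BC) Wilson-Cowan-style sketch. All weights and drives below are invented; the actual model additionally includes BSCs, other interneurons, and septal/EC/DG inputs:

```python
import math

def sigmoid(x):
    """Saturating population activation function, range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def simulate(w_pp, w_pb, w_bp, w_bb, drive_p, drive_b,
             tau=10.0, dt=0.1, steps=5000):
    """Euler integration of a minimal PC-BC firing-rate circuit:
    PCs excite themselves and BCs; BCs inhibit PCs and themselves."""
    p = b = 0.1
    trace = []
    for _ in range(steps):
        dp = (-p + sigmoid(w_pp * p - w_pb * b + drive_p)) / tau
        db = (-b + sigmoid(w_bp * p - w_bb * b + drive_b)) / tau
        p += dt * dp
        b += dt * db
        trace.append(p)
    return trace

# Hypothetical parameter values, purely for illustration.
trace = simulate(w_pp=10, w_pb=10, w_bp=10, w_bb=2,
                 drive_p=-2, drive_b=-4)
```

Depending on the weights and drives, such excitatory-inhibitory loops settle to a steady state or oscillate, which is the bifurcation structure exploited in the abstract's analysis.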

References

1. Denham MJ, Borisyuk RM. A model of theta rhythm production in the septal-hippocampal system and its modulation by ascending brain stem pathways. Hippocampus. 2000; 10(6): 698-716.

2. Cutsuridis V, Cobb S, Graham BP. Encoding and retrieval in a model of the hippocampal CA1 microcircuit. Hippocampus. 2010; 20(3): 423-46.

3. Klausberger T, et al. Brain-state- and cell-type-specific firing of hippocampal interneurons in vivo. Nature. 2003; 421(6925): 844-8.

4. Klausberger T, et al. Spike timing of dendrite-targeting bistratified cells during hippocampal network oscillations in vivo. Nature Neuroscience. 2004; 7(1): 41-7.

P70 Influence of anatomical connectivity and intrinsic dynamics in a connectome based neural mass model of TMS-evoked potentials

Neda Kaboodvand1, John Griffiths2

1Karolinska Institutet, Department of Clinical Neuroscience, Stockholm, Sweden; 2Centre for Addiction and Mental Health, Krembil Centre for Neuroinformatics, Toronto, Canada

Correspondence: John Griffiths (j.davidgriffiths@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P70

Perturbation via electromagnetic stimulation is a powerful way of probing neural systems to better understand their functional organization. One of the most widely used neurostimulation techniques in human neuroscience is transcranial magnetic stimulation (TMS) with concurrently recorded electroencephalography (EEG). The immediate EEG responses to single-pulse TMS stimulation, termed TMS-evoked potentials (TEPs), are spatiotemporal waveforms in EEG sensor- or source-space [1]. TEPs display several characteristic features, including i) rapid wave-like propagation away from the primary stimulation site, and ii) multiple volleys of recurrent activity that continue for several hundred milliseconds following the stimulation pulse. These TEP patterns reflect reverberant activity in large-scale cortico-cortical and cortico-subcortical brain networks, and have been used to study neural excitability in a wide variety of research contexts, including sleep, anaesthesia, and coma [2]. Relatively little work has been done, however, on computational modelling of TEP waveform morphologies and how these spatiotemporal patterns emerge from a combination of global brain network structure and local physiological characteristics. Here we present a novel connectome-based neural mass model of TEPs that accurately reproduces recordings across multiple subjects and stimulation sites. We employ a biophysical electric field model (using the SimNIBS [3] library), based on T1-weighted MRI-derived cortical geometry and personalized to individual subjects, to identify the electric field ('E-field') distribution over the cortical surface resulting from stimulation at a given TMS coil location and orientation. These TMS-induced E-field maps are then summed to yield a current injection pattern over regions in a canonical FreeSurfer-based brain parcellation.
Whole-brain neural activity is modelled with a network of oscillatory (FitzHugh-Nagumo) units [4,5], coupled by anatomical connectivity weights derived from diffusion-weighted MRI tractography [6], and perturbed by a brief square-wave current injection weighted regionally by the cortical E-field map magnitudes. Using this model we are able to accurately reproduce the typical radially propagating TEP patterns under a wide range of parameter values. For the later (150 ms+) TEP components, however, we find it necessary to modify the weight of cortico-thalamic and thalamo-cortical projections in the tractography-defined anatomical connectivity (see also [7]), which has the effect of promoting recurrent activity patterns. These results contribute important insights to our long-term objective of developing an accurate model of TEPs that can be used to guide the design and administration of TMS-EEG for excitability mapping in clinical contexts.

References

1. Ilmoniemi RJ, Kičić D. Methodology for combined TMS and EEG. Brain Topography. 2010; 22(4): 233.

2. Massimini M, et al. Breakdown of cortical effective connectivity during sleep. Science. 2005; 309(5744): 2228-32.

3. Saturnino GB, et al. SimNIBS 2.1: A comprehensive pipeline for individualized electric field modelling for transcranial brain stimulation. In: Brain and Human Body Modeling. Springer, Cham; 2019. p. 3-25.

4. Izhikevich EM, FitzHugh R. FitzHugh-Nagumo model. Scholarpedia. 2006; 1(9): 1349.

5. Spiegler A, Hansen EC, Bernard C, McIntosh AR, Jirsa VK. Selective activation of resting-state networks following focal stimulation in a connectome-based network model of the human brain. eNeuro. 2016; 3(5).

6. Schirner M, Rothmeier S, Jirsa VK, McIntosh AR, Ritter P. An automated pipeline for constructing personalized virtual brains from multimodal neuroimaging data. NeuroImage. 2015; 117: 343-57.

7. Bensaid S, Modolo J, Merlet I, Wendling F, Benquet P. COALIA: a computational model of human EEG for consciousness research. Frontiers in Systems Neuroscience. 2019; 13: 59.

P71 Local homeostatic regulation of the spectral radius of echo-state networks

Fabian Schubert1, Claudius Gros2

1Goethe University Frankfurt, Institute for Theoretical Physics, Frankfurt am Main, Germany; 2Goethe University Frankfurt, Frankfurt am Main, Germany

Correspondence: Fabian Schubert (fschubert@itp.uni-frankfurt.de)

BMC Neuroscience 2020, 21(Suppl 1):P71

Recurrent cortical network dynamics plays a crucial role in sequential information processing in the brain. While the theoretical framework of reservoir computing provides a conceptual basis for understanding recurrent neural computation, it often requires manual adjustment of global network parameters, in particular of the spectral radius of the recurrent synaptic weight matrix. Being an abstract mathematical quantity, the spectral radius is not readily accessible to biological neural networks, which operate on the principle that information about the network state should either be encoded in local intrinsic dynamical quantities (e.g., membrane potentials) or transmitted via synaptic connectivity. We present an intrinsic adaptation rule, termed flow control, for echo state networks that relies solely on locally accessible variables while still tuning a global quantity, the spectral radius of the network, towards a desired value. The adaptation rule works online, in the presence of a continuous stream of input signals. It is based on a local comparison between the mean squared recurrent membrane potential and the mean squared activity of the neuron itself. It is derived from a global scaling condition on the dynamic flow of neural activities and requires separability between external and recurrent input currents. The effectiveness of the presented mechanism is tested numerically using different external input protocols. Furthermore, network performance after adaptation is evaluated by training the network to perform a time-delayed XOR operation on binary sequences.
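The key consequence of the rule's fixed point — each neuron's mean squared recurrent input matching the target-scaled mean squared activity, which for roughly uncorrelated activity pins every effective row norm to the target radius — can be checked directly. The sketch below applies that fixed point statically via local gains, rather than running the online adaptation itself:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 500
r_target = 0.8   # desired spectral radius

# Random recurrent weights; at this scaling the spectral radius is close to 1.
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))

# Fixed point of flow control (under the uncorrelated-activity assumption):
# local gains a_i that set each neuron's effective row norm to r_target.
a = r_target / np.linalg.norm(W, axis=1)
W_eff = a[:, None] * W

# For large random matrices, equal row norms imply spectral radius ~ r_target.
rho = np.max(np.abs(np.linalg.eigvals(W_eff)))
```

Each gain a_i uses only that neuron's own afferent weights, illustrating how a purely local quantity can control the global spectral radius.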

P72 Comparison of surrogate techniques for evaluation of spatio-temporal patterns in massively parallel spike trains

Alessandra Stella1, Peter Bouss2, Günther Palm3, Sonja Gruen4

1Forschungszentrum Jülich, Jülich, Germany; 2Forschungszentrum Jülich, Institute of Neuroscience and Medicine (INM-6, INM-10, IAS-6), Jülich, Germany; 3University of Ulm, Institute of Neural Information Processing, Ulm, Germany; 4Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6 and INM-10), Jülich, Germany

Correspondence: Alessandra Stella (a.stella@fz-juelich.de)

BMC Neuroscience 2020, 21(Suppl 1):P72

To identify active cell assemblies, we developed a method to detect significant spatio-temporal spike patterns (STPs). The method, called SPADE [1-3], identifies repeating, ms-precise spike patterns across neurons. SPADE first discretizes the spike trains into exclusive bins (defining the pattern precision, e.g., 5 ms) and clips the bin content to 1 if a bin contains more than one spike. Second, STPs are mined by Frequent Itemset Mining [4], and their counts are evaluated for significance through comparison to surrogate data. The distribution of pattern counts in the surrogate data provides p-values for determining the significance of grouped patterns. The surrogate data implement the null hypothesis of independence, and a classical choice is uniform dithering (UD) [5], i.e., an independent, uniformly distributed displacement of each spike (e.g., over a range of +/- 5 times the bin width [1]). This approach maintains neither the absolute refractory period nor any existing ISI regularity. In the surrogates, the binarization leads to a higher probability of more than one spike per bin, and the subsequent clipping therefore reduces the spike count (by up to 12%, particularly for high firing rates) compared to the original data.
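The spike-count loss caused by uniform dithering followed by binning and clipping can be demonstrated on a toy regular 100 Hz spike train (the parameters here are illustrative, not those of the study):

```python
import random

random.seed(7)

def uniform_dither(spikes, d=25.0):
    """UD surrogate: displace each spike independently by U(-d, d) ms."""
    return sorted(t + random.uniform(-d, d) for t in spikes)

def binned_clipped_count(spikes, binsize=5.0, t_stop=1000.0):
    """Bin spike times and clip each bin to at most one spike, as in SPADE."""
    occupied = {int(t // binsize) for t in spikes if 0 <= t < t_stop}
    return len(occupied)

# A regular 100 Hz spike train (CV = 0): one spike every 10 ms for 1 s.
spikes = [i * 10.0 for i in range(100)]

orig = binned_clipped_count(spikes)            # 100: no bin holds two spikes
surr = binned_clipped_count(uniform_dither(spikes))
loss = 1.0 - surr / orig   # fraction lost to clipping (plus edge effects)
```

The regular train never loses spikes to clipping, while its dithered surrogate does, which is the count mismatch between original data and UD surrogates discussed above.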

This may cause false positives (cf. [6]). Therefore, we explored further methods for surrogate generation. To avoid different spike counts in the original and surrogate data, bin shuffling shuffles the bins after binning the original data. To keep the refractory period (RP), uniform dithering with refractory period (UD-RP) does not allow dithered spikes within a short time interval after each spike. Dithering according to the ISI distribution (ISI-D) [e.g. 5] or the joint-ISI distribution (J-ISI-D) [7] conserves the ISI and joint-ISI distributions, respectively. Spike-train shifting (ST-Shift) [5,8] moves the whole spike train, trial by trial, by a random amount, thereby only affecting the relation of the spike trains to each other. All of these methods thus implement different null hypotheses, as summarized in the table below, which shows the non-/preservation (no/yes) of features in the various surrogates compared to the original data.
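For illustration, the binning/clipping step and two of the surrogate schemes can be sketched as follows (a simplified single-trial sketch with assumed parameter values, not the SPADE implementation itself; the circular wrap-around in the shift is one possible boundary treatment):

```python
import numpy as np

rng = np.random.default_rng(1)
BIN = 0.005   # 5 ms bins, the assumed pattern precision

def bin_and_clip(spike_times, t_stop):
    """First SPADE step: discretize into exclusive bins and clip counts to 0/1."""
    n_bins = int(np.ceil(t_stop / BIN))
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, t_stop))
    return (counts > 0).astype(int)

def uniform_dithering(spike_times, t_stop, dither=5 * BIN):
    """UD: displace each spike independently; preserves neither ISIs nor,
    after clipping, the exact spike count."""
    s = spike_times + rng.uniform(-dither, dither, len(spike_times))
    return np.sort(s[(s >= 0.0) & (s < t_stop)])

def spike_train_shift(spike_times, t_stop, max_shift=5 * BIN):
    """ST-Shift: shift the whole train by one random amount (circular here),
    preserving the ISI structure up to the wrap-around."""
    return np.sort((spike_times + rng.uniform(-max_shift, max_shift)) % t_stop)

train = np.sort(rng.uniform(0.0, 1.0, 100))
clipped = bin_and_clip(train, 1.0)
ud = uniform_dithering(train, 1.0)
st = spike_train_shift(train, 1.0)
# ST-Shift keeps the spike count exactly; UD can lose spikes at the borders,
# and clipping can merge spikes that were dithered into the same bin
print(len(train), clipped.sum(), len(ud), len(st))
```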

We applied all surrogate methods (within SPADE) and compared their results using artificial and experimental spike data simultaneously recorded in pre-/motor cortex of a macaque monkey performing a reach-to-grasp task [9]. We find that all methods besides UD lead to very similar results in terms of the number of patterns and their composition. UD results in a much larger number of patterns, in particular if neurons have very high firing rates and exhibit regular spike trains. We conclude that the reduction in spike count under UD increases the false positive rate for spike trains with CV < 1 and/or high firing rates; the other methods are much less affected, with spike-train shifting affected the least.

Table: statistical properties of the spike trains that are preserved (yes), approximately preserved (approx.), or not preserved (no) by the various surrogate techniques considered

Method/Feature   UD    ISI-D    J-ISI-D   UD-RP     Bin-Shuff   ST-Shift
Spike count      no    yes      approx.   approx.   yes         yes
ISI              no    approx.  approx.   no        no          yes
J-ISI            no    no       approx.   no        no          yes

References

1. Torre E, et al. Synchronous spike patterns in macaque motor cortex during an instructed-delay reach-to-grasp task. Journal of Neuroscience. 2016; 36(32): 8329-40.

2. Quaglio P, Yegenoglu A, Torre E, Endres DM, Grün S. Detection and evaluation of spatio-temporal spike patterns in massively parallel spike train data with SPADE. Frontiers in Computational Neuroscience. 2017; 11: 41.

3. Stella A, Quaglio P, Torre E, Grün S. 3d-SPADE: Significance evaluation of spatio-temporal patterns of various temporal extents. Biosystems. 2019; 185: 104022.

4. Picado-Muiño D, Borgelt C, Berger D, Gerstein G, Grün S. Finding neural assemblies with frequent item set mining. Frontiers in Neuroinformatics. 2013; 7: 9.

5. Louis S, Borgelt C, Grün S. Complexity distribution as a measure for assembly size and temporal precision. Neural Networks. 2010; 23(6): 705-12.

6. Pipa G, Grün S, Van Vreeswijk C. Impact of spike train autostructure on probability distribution of joint spike events. Neural Computation. 2013; 25(5): 1123-63.

7. Gerstein GL. Searching for significance in spatio-temporal firing patterns. Acta Neurobiologiae Experimentalis. 2004; 64(2): 203-8.

8. Pipa G, Wheeler DW, Singer W, Nikolić D. NeuroXidence: reliable and efficient analysis of an excess or deficiency of joint-spike events. Journal of Computational Neuroscience. 2008; 25(1): 64-88.

9. Brochier T, et al. Massively parallel recordings in macaque motor cortex during an instructed delayed reach-to-grasp task. Scientific Data. 2018; 5(1): 1-23.

P73 Effects of transneuronal spreading in the logistic diffusion model of Aβ protein with longitudinal PET scans

Byeong Chang Jeong1, Myungwon Choi2, Daegyeom Kim2, Xue Chen3, Marcus Kaiser4, Cheol E Han2

1Korea University, Sejong, South Korea; 2Korea University, Department of Electronics and Information Engineering, Sejong, South Korea; 3China University of Petroleum, College of Control Science and Engineering, Qingdao, China; 4Newcastle University, School of Computing, Newcastle, United Kingdom

Correspondence: Byeong Chang Jeong (ni3_jbc@naver.com)

BMC Neuroscience 2020, 21(Suppl 1):P73

Alzheimer’s disease (AD) is a neurodegenerative disease that causes severe loss of cognitive function and deteriorates the quality of daily life of the elderly. One hypothesis of AD development and progression is the deposition of amyloid-beta protein, which is known to cause neuronal cell death in the gray matter and to degrade the cognitive function of the affected regions. It has recently been reported that, besides diffusing locally through non-neuronal tissue, the toxic protein can move through the connections between neurons, a process called transneuronal spreading. In this study, we investigated the effect of transneuronal spreading on the pattern of amyloid deposition over the brain by simulating a mathematical model based on actual neuroimaging data. The model consists of two components: transneuronal spreading, which captures propagation of the toxic protein through the white matter, and local spreading, which captures its local diffusion through the gray matter. Each component has its own parameter, balancing their relative contributions. We estimated all model parameters with a Bayesian inference method, comparing simulation results against longitudinal data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset. We modelled the brain as a high-resolution graph whose nodes are small volumes of cerebral cortex and whose edges encode the adjacency between them, delineating spreading pathways. From structural magnetic resonance (MR) images, we transformed the cerebral cortices into a triangular lattice of prism-like volumes, and from the topology of this lattice we extracted the local adjacency between nodes. In contrast, for the long-range connections through neuronal fibers, we obtained the connectivity between nodes from diffusion-weighted MR imaging (DWI).
Each node carries a level of amyloid deposition, obtained from 18F-florbetapir positron emission tomography (PET) images with partial volume effect correction. This high-resolution graph model enables more granular and accurate simulation. To investigate the effect of transneuronal spreading, we compared two conditions: simulation 1) with only the local spreading, and 2) with both local and transneuronal spreading. Both models showed spread of amyloid from the regions where the initial protein accumulation is high, also known as epicenters. The former condition may explain gradual spread from the epicenters to nearby regions, but it could not explain remote spread from the epicenters to distant regions. The latter, in contrast, reproduced not only local but also remote spread. Thus, the model with both local and transneuronal spreading components better explains the deposition of amyloid-beta in AD. Our results support previous reports of transmission of amyloid through neuronal pathways.
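A minimal numerical sketch of such a two-component model is given below (an illustrative ring lattice stands in for the triangular cortical lattice, sparse random links stand in for the DWI connectome, and all parameters are arbitrary rather than the Bayesian estimates): logistic local production plus diffusion on two graph Laplacians, one for gray-matter adjacency and one for white-matter fibres.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 60                                   # cortical patches (graph nodes)

# local gray-matter adjacency: a ring lattice of neighbouring patches
A_local = np.zeros((N, N))
for i in range(N):
    A_local[i, (i + 1) % N] = A_local[(i + 1) % N, i] = 1.0

# long-range white-matter "fibres": sparse random symmetric links
A_fibre = np.triu((rng.random((N, N)) < 0.05).astype(float), 1)
A_fibre = A_fibre + A_fibre.T

laplacian = lambda A: np.diag(A.sum(1)) - A
L_loc, L_fib = laplacian(A_local), laplacian(A_fibre)

x = np.zeros(N)
x[0] = 0.2                               # epicenter with initial deposition
rho, alpha, beta, dt = 1.0, 0.3, 0.3, 0.01
for _ in range(2000):                    # Euler integration to t = 20
    growth = rho * x * (1.0 - x)         # logistic local production (bounded at 1)
    x = x + dt * (growth - alpha * L_loc @ x - beta * L_fib @ x)

print(f"deposition range after spread: {x.min():.2f} .. {x.max():.2f}")
```

Setting beta = 0 recovers the local-only condition; with beta > 0 the fibre term seeds remote regions directly, reproducing the qualitative difference between the two conditions described above.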

Acknowledgements: This work was supported by the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI) that was funded by the Ministry of Health & Welfare, Republic of Korea (HI19C0645), and Medical Research Council, UK (MR/T004347/1).

P74 Implications of reduced somatostatin interneuron inhibition in depression on human cortical microcircuit activity

Heng Kang Yao1, Alexandre Guet-McCreight2, Etay Hay3

1University of Toronto, Physiology, Toronto, Canada; 2Centre for Addiction and Mental Health, Krembil Centre for Neuroinformatics, Toronto, Canada; 3Centre for Addiction and Mental Health/ University of Toronto, Krembil Centre for Neuroinformatics/ Psychiatry, Toronto, Canada

Correspondence: Heng Kang Yao (kant.yao@mail.utoronto.ca)

BMC Neuroscience 2020, 21(Suppl 1):P74

There is increasing evidence of reduced cortical inhibition in a variety of psychiatric disorders such as major depressive disorder (MDD) and schizophrenia. Cortical inhibition is mediated by GABAergic interneurons and plays a key role in modulating information processing in cortical pyramidal neurons. In particular, interneurons expressing somatostatin (SST) inhibit the distal dendrites of pyramidal neurons and mediate lateral inhibition. Recent postmortem studies showed reduced SST expression in these interneurons in MDD patients, suggesting weaker levels of inhibition. The reduced inhibition is thought to result in a lower signal-to-noise ratio of cortical microcircuit activity, due to abnormally increased intrinsic activity relative to stimulus-evoked activity. We test this hypothesis and characterize the implications of reduced SST inhibition in depression using novel computational models of human cortical microcircuits. We generated detailed models of the major neuronal types in human cortical layer 2/3 by integrating unique human electrophysiology data and applying machine-learning optimization algorithms. We connected the model neurons in a microcircuit according to connectivity statistics and synaptic parameters derived from the literature. We show that intrinsic activity significantly increases in models of MDD microcircuits, in which SST interneuron inhibition was reduced, compared to healthy microcircuits. We then compared the signal-to-noise ratio of intrinsic and evoked activity in healthy and MDD microcircuits. Our results thus elucidate the role that inhibition plays in normal and pathological information processing by human cortical microcircuits.
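The hypothesized effect can be caricatured with a toy two-population rate model (not the detailed data-driven microcircuit used in the study; all weights and drives are invented for illustration): weakening the SST-to-pyramidal inhibitory weight raises the baseline ("intrinsic") pyramidal rate.

```python
import numpy as np

def steady_pyr_rate(w_sst_to_pyr, steps=2000, dt=0.01):
    """Steady-state pyramidal rate in a toy Pyr-SST loop (illustrative values)."""
    relu = lambda v: np.maximum(v, 0.0)
    r_pyr, r_sst = 0.0, 0.0
    for _ in range(steps):
        d_pyr = -r_pyr + relu(5.0 - w_sst_to_pyr * r_sst)  # tonic drive 5, SST inhibition
        d_sst = -r_sst + relu(0.5 * r_pyr)                  # SST recruited by Pyr
        r_pyr += dt * d_pyr
        r_sst += dt * d_sst
    return r_pyr

healthy = steady_pyr_rate(w_sst_to_pyr=2.0)
reduced = steady_pyr_rate(w_sst_to_pyr=1.0)   # ~50% weaker SST inhibition
print(f"baseline Pyr rate: healthy {healthy:.2f}, reduced-SST {reduced:.2f}")
```

At the fixed point, r_pyr = 5 / (1 + 0.5 * w), so halving the SST weight raises the baseline rate from 2.5 to about 3.33, mimicking increased intrinsic activity in the MDD condition.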

P75 The covariance perceptron: theory and application

Matthieu Gilson1, David Dahmen2, Ruben Moreno-Bote3, Andrea Insabato4, Moritz Helias5

1Universitat Pompeu Fabra, Center for Brain and Cognition, Barcelona, Spain; 2Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6), Jülich, Germany; 3Universitat Pompeu Fabra, Center for Brain and Cognition & DTIC, Barcelona, Spain; 4Universty Pompeu Fabra, Barcelona, Spain; 5Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6, INM-10) and Institute for Advanced Simulation (IAS-6), Jülich, Germany

Correspondence: Matthieu Gilson (matthieu.gilson@upf.edu)

BMC Neuroscience 2020, 21(Suppl 1):P75

Introduction: Many efforts in the study of the brain have focused on representations of stimuli by neurons and learning thereof. Our work [1] demonstrates the potential of a novel learning paradigm for neuronal activity with high variability, where distributed information is embedded in the correlation patterns.

Learning theory: We derive a learning rule to train a network to perform an arbitrary operation on the spatio-temporal covariances of time series. To illustrate our scheme, we use the example of classification, where the network is trained to perform an input-output mapping from given sets of input patterns to representative output patterns, one output per input group. This setup is the same as learning activity patterns for the classical perceptron [2], a central concept that has brought many fruitful theories to the fields of neural coding and learning in networks. For that reason, we refer to our classifier as the “covariance perceptron”. Compared to the classical perceptron, a conceptual difference is that we base information on the co-fluctuations of the input time series, which result in second-order statistics. In this way, robust information can be conveyed despite a high apparent variability in the activity. This approach is a radical change of perspective compared to classical approaches, which typically transform time series into a succession of static patterns and treat fluctuations as noise. On the technical side, our theory relies on multivariate autoregressive (MAR) dynamics, for which we derive the weight update (a gradient descent) such that input covariance patterns are mapped to given objective output covariance patterns.
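The core idea can be sketched in its simplest, purely spatial form (a static caricature of the MAR-based scheme in [1]; the actual theory handles spatio-temporal covariances of a recurrent network, and all dimensions and targets below are invented): a linear readout W maps an input covariance P to an output covariance Q = W P W^T, and gradient descent on ||Q - Q*||_F^2 trains the mapping.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 4, 2                     # input / output dimensions

# two input "categories", each defined by its spatial covariance pattern
P0 = np.diag([1.0, 1.0, 0.2, 0.2])
P1 = np.diag([0.2, 0.2, 1.0, 1.0])
# objective output covariances: category 0 -> high variance on output 0, etc.
Q0 = np.diag([1.0, 0.2])
Q1 = np.diag([0.2, 1.0])

W = 0.1 * rng.normal(size=(n, m))
eta = 0.01
for _ in range(5000):
    for P, Qstar in ((P0, Q0), (P1, Q1)):
        Q = W @ P @ W.T                        # covariance mapped to the output
        W -= eta * 4.0 * (Q - Qstar) @ W @ P   # gradient of ||Q - Q*||_F^2

err = sum(np.linalg.norm(W @ P @ W.T - Q) for P, Q in ((P0, Q0), (P1, Q1)))
print(f"remaining covariance error: {err:.3f}")
```

Classification then amounts to reading out which output unit carries the larger variance, exactly as in panel A of the figure below, but here on static spatial covariances only.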

Application to MNIST database: To further explore its robustness, we apply the covariance perceptron to the recognition of objects that move in the visual field by a network of sensory (input) and downstream (output) neurons. We use the MNIST database of handwritten digits 0 to 4. As illustrated in Figure 1, the traces “viewed” by an input neuron exhibit large variability across presentations. Because we want to identify both the digit identity and its moving direction, covariances of the input time series are necessary. We show that the proposed learning rule can successfully train the network to perform the classification task and robustly generalize to unseen data. In our work [1], we also show that the covariance perceptron favorably compares to the classical nonlinear perceptron in extracting second-order statistics.

Fig. 1

A Moving digit in the visual field with two columns of 9 input neurons each feeding 10 output neurons (one per category, the largest output variance indicates the predicted category). B Responses of the input to the digit 0 in panel A moving to the right. C Mean activity traces for an input neuron for digits 0 and 2, moving left or right as indicated above. The colored areas correspond to the standard deviation of the time-varying activity over all patterns. D The information relative to both digit and motion is reflected in the input covariances. E Confusion matrices of the covariance perceptron for the train and test sets for digits 0 to 4 moving left and right

Towards distributed spike-based information processing: We envisage future steps that transpose this work to information conveyed by higher-order statistics of spike trains, to obtain a supervised equivalent of spike-timing-dependent plasticity (STDP).

References

1. Gilson M, Dahmen D, Moreno-Bote R, Insabato A, Helias M. The covariance perceptron: A new framework for classification and processing of time series in recurrent neural networks. bioRxiv. 2019; 562546.

2. Bishop CM. Pattern recognition and machine learning. Springer; 2006.

P76 Inferring a simple mechanism for alpha-blocking by fitting a neural population model to EEG spectra

Agus Hartoyo1, Peter Cadusch2, David Liley3, Damien Hicks4

1Swinburne University of Technology, Optical Sciences Centre, Melbourne, Australia; 2Swinburne University of Technology, Department of Physics and Astronomy, Melbourne, Australia; 3Swinburne University of Technology, Centre for Human Psychopharmacology, Melbourne, Australia; 4Centre for Human Psychopharmacology, Optical Sciences Centre, Melbourne, Australia

Correspondence: Agus Hartoyo (ahartoyo@swin.edu.au)

BMC Neuroscience 2020, 21(Suppl 1):P76

Alpha blocking, a phenomenon where the alpha rhythm is reduced by attention to a visual, auditory, tactile or cognitive stimulus, is one of the most prominent features of human electroencephalography (EEG) signals. Here we identify a simple physiological mechanism by which opening of the eyes causes attenuation of the alpha rhythm. We fit a neural population model to EEG spectra from 82 subjects, each showing different degrees of alpha blocking upon opening of their eyes. Although it is notoriously difficult to estimate parameters from fitting such models, we show that, by regularizing the differences in parameter estimates between eyes-closed and eyes-open states, we can reduce the uncertainties in these differences without significantly compromising fit quality. From this emerges a parsimonious explanation for the spectral changes between states: just a single parameter, pei, corresponding to the strength of a tonic, excitatory input to the inhibitory population, is sufficient to explain the reduction in alpha rhythm upon opening of the eyes. When comparing parameter estimates across different subjects we find that the inferred differential change in pei for each subject increases monotonically with the degree of alpha blocking observed. In contrast, other parameters show weak or negligible differential changes that do not scale with the degree of alpha attenuation in each subject. Thus, most of the variation in alpha blocking across subjects can be attributed to the strength of a tonic afferent signal to the inhibitory cortical population.
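The regularization idea can be sketched on a toy problem (a single Lorentzian alpha peak stands in for the neural population model, fitted with a crude normalized-gradient optimizer; all values are illustrative and the parameters are not those of the actual model): penalizing squared differences between eyes-closed and eyes-open parameters concentrates the between-state change in the one parameter the data actually require, analogous to the differential change in pei.

```python
import numpy as np

rng = np.random.default_rng(4)
f = np.linspace(1.0, 40.0, 200)

def spectrum(theta):
    """Toy stand-in for the population model: a Lorentzian alpha peak."""
    amp, f0, width = theta
    return amp / ((f - f0) ** 2 + width ** 2)

# synthetic eyes-closed (EC) / eyes-open (EO) spectra: in this ground truth,
# only the amplitude parameter differs between the two states
S_ec = spectrum([40.0, 10.0, 2.0]) + 0.01 * rng.normal(size=f.size)
S_eo = spectrum([15.0, 10.0, 2.0]) + 0.01 * rng.normal(size=f.size)

def loss(params, lam):
    th_ec, th_eo = params[:3], params[3:]
    fit = (np.sum((spectrum(th_ec) - S_ec) ** 2) +
           np.sum((spectrum(th_eo) - S_eo) ** 2))
    # regularize the *differences* between the two states' parameters
    return fit + lam * np.sum((th_ec - th_eo) ** 2)

params = np.array([30.0, 9.0, 3.0, 30.0, 9.0, 3.0])  # shared starting point
lam, lr = 0.01, 0.02
for _ in range(6000):
    grad = np.zeros(6)
    for i in range(6):                                # central-difference gradient
        d = np.zeros(6)
        d[i] = 1e-4
        grad[i] = (loss(params + d, lam) - loss(params - d, lam)) / 2e-4
    params -= lr * grad / (np.linalg.norm(grad) + 1e-12)  # normalized step

diff = params[:3] - params[3:]
print("differential change (amp, f0, width):", np.round(diff, 2))
```

The fitted differential change is large only for the amplitude parameter, while the peak frequency and width show negligible between-state differences, mirroring the parsimonious single-parameter explanation described above.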

P77 Tonic GABAergic inhibition enhances activity-dependent dendritic calcium signaling

Thomas M Morse1, Chiayu Q Chiu2, Francesca Nani3, Frederic Knoflach3, Maria-Clemencia Hernandez3, Monika Jadi4, Michael J Higley1

1Yale University, Department of Neuroscience, Kavli Institute of Neuroscience, New Haven, Connecticut, United States of America; 2Universidad de Valparaiso, Centro Interdisciplinario de Neurociencia de Valparaiso, Valparaiso, Chile; 3Roche Innovation Center, Roche Pharmaceutical Research and Early Development, Neuroscience and Rare Diseases, Basel, Switzerland; 4Yale University, Department of Psychiatry, New Haven, Connecticut, United States of America

Correspondence: Thomas M Morse (tom.morse@yale.edu)

BMC Neuroscience 2020, 21(Suppl 1):P77

Brain activity is highly regulated by GABAergic activity, which acts via GABA type A receptors (GABAARs) to suppress somatic spike generation as well as dendritic synaptic integration and calcium signaling. Tonic GABAergic conductances mediated by distinct receptor subtypes can also inhibit neuronal excitability and spike output, though the consequences for dendritic calcium signaling are unclear. Here, we use 2-photon calcium imaging in cortical pyramidal neurons and computational modeling to show that low affinity GABAARs containing an alpha5 subunit mediate a tonic hyperpolarization of the dendritic membrane potential, resulting in deinactivation of voltage-gated calcium channels and a paradoxical boosting of action potential-evoked calcium influx. We also find that GABAergic enhancement of calcium signaling modulates short-term synaptic plasticity, augmenting depolarization-induced suppression of inhibition. These results demonstrate a novel role for GABA in the control of dendritic activity and suggest a mechanism for differential modulation of electrical and biochemical signaling.

Acknowledgements: The authors wish to thank members of the Higley laboratory and Dr. Jessica A. Cardin for helpful comments, the Yale Center for Research Computing for support with the Yale Farnam high performance cluster, Henner Knust for compound synthesis, Christian Miscenic and Marcello Foggetta for cell transfections and membrane preparations, Judith Lengyel, Gregoire Friz, and Maria Karg for cell line generation and radioligand binding assays, and Marie Claire Pflimlin for support with electrophysiological characterization of Compound A selectivity. This work was supported by funding from the NIH/NIMH (R01 MH099045 and MD113852 to MJH, K01 MH097961 to CQC), funding agencies in Chile (FONDECYT No. 1171840 and MILENIO PROYECTO P09-022-F, CINV to CQC), and Roche Pharmaceutical.

P78 Phasic response of epileptic models

Alberto P Cervera, Jaroslav Hlinka

Institute of Computer Science - Czech Academy of Sciences, Prague, Czechia

Correspondence: Alberto P Cervera (perez@cs.cas.cz)

BMC Neuroscience 2020, 21(Suppl 1):P78

Although epilepsy is one of the most common chronic neurological disorders, the mechanisms underlying the initiation of epileptic seizures remain unknown. Epileptic seizures are generated by intense activity emerging from a highly synchronized neuronal population. These phenomena are usually preceded and followed by intervals of reduced activity, known as interictal periods. Importantly, the transient neuronal activity during these interictal periods, known as interictal epileptiform discharges (IEDs), is considered a key mechanism governing the transition to seizure. However, whether IEDs prevent or facilitate that transition is still a matter of debate.

In this work, based on previous findings in [1], we show how these dual effects of IEDs can be interpreted in terms of the phasic response of a slow-fast system. Since the phase response of a given system follows from its distribution of isochrons, we perform a theoretical and computational study of the isochrons and phase response curves of different planar slow-fast epileptic models. Our results reveal the strong influence of the slow vector field on the phasic response of the system to IEDs and suggest theoretical strategies whose effects range from short delays of seizures to their full suppression.
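As a generic illustration of such a phasic-response analysis (using the FitzHugh-Nagumo model as a stand-in planar slow-fast oscillator, not the specific epileptic models of the study; the kick size and phases are arbitrary), one can perturb the fast variable at different phases of the cycle and measure how the next spike-like event is advanced or delayed:

```python
import numpy as np

# FitzHugh-Nagumo in its textbook oscillatory regime (illustrative stand-in)
eps, a, b, I = 0.08, 0.7, 0.8, 0.5
dt = 0.01

def step(v, w):
    return (v + dt * (v - v**3 / 3.0 - w + I),
            w + dt * eps * (v + a - b * w))

def steps_to_crossing(v, w, n_max=1_000_000):
    """Integrate until the next upward crossing of v = 1 (the fast upstroke)."""
    prev = v
    for n in range(1, n_max):
        v, w = step(v, w)
        if prev < 1.0 <= v:
            return n, v, w
        prev = v
    raise RuntimeError("no crossing found")

v, w = 1.0, 0.0
for _ in range(100000):                  # settle onto the limit cycle
    v, w = step(v, w)
_, v, w = steps_to_crossing(v, w)        # align the state to a crossing
period, v0, w0 = steps_to_crossing(v, w)

# phasic response: apply a brief IED-like kick to v at several phases and
# measure how much the next "spike" is advanced (>0) or delayed (<0)
shifts = []
for frac in (0.1, 0.3, 0.5, 0.7, 0.9):
    v, w = v0, w0
    for _ in range(int(frac * period)):
        v, w = step(v, w)
    v += 0.2                             # the perturbation
    n_rest, _, _ = steps_to_crossing(v, w)
    shifts.append((period - int(frac * period) - n_rest) * dt)

print("phase shifts at fractions 0.1..0.9:", np.round(shifts, 2))
```

The sign and size of the shift depend strongly on where the kick lands relative to the slow manifold, which is the planar analogue of the dual advancing/delaying effects of IEDs discussed above.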

Reference

1. Chang WC, et al. Loss of neuronal network resilience precedes seizures and determines the ictogenic nature of interictal synaptic perturbations. Nature Neuroscience. 2018; 21(12): 1742-52.

P79 Frequency-dependent synaptic gain in a computational model of mouse thoracic sympathetic postganglionic neurons

Astrid Prinz1, Michael McKinnon1, Kun Tian1, Shawn Hochman2

1Emory University, Department of Biology, Atlanta, Georgia, United States of America; 2Emory University, Department of Physiology, Atlanta, Georgia, United States of America

Correspondence: Astrid Prinz (astrid.prinz@emory.edu)

BMC Neuroscience 2020, 21(Suppl 1):P79

Postganglionic neurons in the thoracic sympathetic chain represent the final common output of the sympathetic nervous system. These neurons receive synaptic inputs exclusively from preganglionic neurons located in the spinal cord. Synaptic inputs come in two varieties: primary inputs, which are invariably suprathreshold, and secondary inputs, which exhibit a range of typically subthreshold amplitudes. Postganglionic neurons typically receive a single primary input and a variable number of secondary inputs in what has been described as an “n+1” connectivity pattern. Secondary inputs have often been viewed as inconsequential to cell recruitment due to the short duration of measured synaptic inputs and the relatively low tonic firing rate of preganglionic neurons in vivo. However, recent whole-cell patch clamp recordings reveal that thoracic postganglionic neurons have a greater capacity for synaptic integration than previous microelectrode recordings would suggest. This supports a greater role for secondary synapses in cell recruitment.

We previously created a conductance-based computational model of mouse thoracic postganglionic neurons. In the present study, we have expanded the single-cell model into a network model with synaptic inputs based on whole-cell recordings. We systematically varied the average firing rate of a network of stochastically firing preganglionic neurons and measured the resultant firing rate in simulated postganglionic neurons. Synaptic gain was defined as the ratio of postganglionic to preganglionic firing rate.

We found that, for a network configuration that mimics the typical arrangement in mouse, low presynaptic firing rates (< 0.1 Hz) resulted in a synaptic gain close to 1, while firing rates closer to 1 Hz resulted in a synaptic gain of 2.5. Synaptic gain diminished for firing rates higher than ~3 Hz (Fig. 1). We also determined that synaptic gain increases linearly with the number of secondary synaptic inputs (n) within the range of physiologically realistic presynaptic firing rates. The amplitude of secondary inputs also determines frequency-dependent synaptic gain, with a bifurcation where the secondary synaptic amplitude equals the recruitment threshold. We further demonstrate that the synaptic gain phenomenon depends on the preservation of passive membrane properties as determined by whole-cell recordings.

Fig. 1

Effect of secondary inputs on synaptic gain. A Firing rate of postganglionic neurons as a function of presynaptic firing rate. Each simulation includes a single primary input, and n secondary synaptic inputs. Dashed line is the line of unity. B Synaptic gain as a function of presynaptic firing rate for postganglionic neurons with different numbers of secondary inputs

One major biological role of the sympathetic nervous system is the regulation of vascular tone in both skeletal muscle and cutaneous structures. The firing rate of muscle vasoconstrictor preganglionic neurons is modulated by the cardiac cycle, while cutaneous vasoconstrictor neurons fire independently of the cardiac cycle. We modulated preganglionic firing rate according to the typical mouse heart rate to determine if cardiac rhythmicity changes the overall firing rate of postganglionic neurons. Cardiac rhythmicity does not appear to have a significant impact on synaptic gain within the physiological range of preganglionic input.

Under normal physiological conditions, the unity gain of sympathetic neurons would lead to faithful transmission of central signals to peripheral targets. However, during episodes of high sympathetic activation, the postganglionic network can amplify central signals in a frequency-dependent manner. These results suggest that postganglionic neurons play a more active role in shaping sympathetic activity than previously thought.
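The "n+1" recruitment mechanism described above can be sketched with an event-driven toy model (parameters are invented for illustration and are not fit to the recordings, so the numerical gain values will differ from those reported): a primary input always fires the cell, while secondary EPSPs must summate within the membrane time constant, so their contribution grows with presynaptic rate.

```python
import numpy as np

rng = np.random.default_rng(5)

def postganglionic_rate(pre_rate, n_secondary, t_sim=2000.0,
                        tau=0.1, thresh=0.9, amp=0.5):
    """Toy leaky cell: one suprathreshold primary input plus a pool of
    n subthreshold secondary inputs, all firing as Poisson processes."""
    n_prim = rng.poisson(pre_rate * t_sim)
    n_sec = rng.poisson(pre_rate * n_secondary * t_sim)
    times = np.concatenate([rng.uniform(0, t_sim, n_prim),
                            rng.uniform(0, t_sim, n_sec)])
    is_primary = np.concatenate([np.ones(n_prim, bool), np.zeros(n_sec, bool)])
    order = np.argsort(times)
    times, is_primary = times[order], is_primary[order]

    v, t_last, spikes = 0.0, 0.0, 0
    for t, prim in zip(times, is_primary):
        v *= np.exp(-(t - t_last) / tau)   # passive decay between events
        t_last = t
        if prim:                            # primary input: always suprathreshold
            spikes += 1
            v = 0.0
        else:                               # secondary EPSPs must summate
            v += amp
            if v >= thresh:
                spikes += 1
                v = 0.0
    return spikes / t_sim

for rate in (0.1, 1.0):
    post = postganglionic_rate(rate, n_secondary=5)
    print(f"preganglionic {rate:.1f} Hz -> synaptic gain {post / rate:.2f}")
```

At low rates, secondary EPSPs almost never coincide within the decay window, so the gain stays near unity; at higher rates, coincidences become frequent and the gain rises, qualitatively reproducing the frequency-dependent amplification in Fig. 1.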

P80 Comp-NeuroFedora, a Free/Open Source operating system for computational neuroscience: download, install, research

Ankur Sinha1, Aniket Pradhan2, Qianqian Fang2, Danny Lee2, Danishka Navin2, Alberto R Sanchez2, Luis Bazan2, Luis M Segundo2, Alessio Ciregia2, Zbigniew Jędrzejewski-Szmek2, Sergio Pascual2, Antonio Trande2, Victor M T Yau2, Morgan Hough2

1University of Hertfordshire, Biocomputation Research Group, Hatfield, United Kingdom; 2Fedora Project

Correspondence: Ankur Sinha (a.sinha2@herts.ac.uk)

BMC Neuroscience 2020, 21(Suppl 1):P80

The promotion and establishment of Open Neuroscience [1] is heavily dependent on the availability of Free/Open Source Software (FOSS) tools that support the modern scientific process. While more and more tools are now being developed using FOSS driven methods to ensure free (as in freedom, and thus also free of cost) access to all, the complexity of these domain specific tools tends to hamper their uptake by the target audience – scientists hailing from multiple, sometimes non-computing, disciplines. The NeuroFedora initiative aims to shrink the chasm between the development of neuroscience tools and their usage [2].

Using the resources of the FOSS Fedora community [3] to implement current best practices in software development, NeuroFedora volunteers identify, package, test, document, and disseminate neuroscience software for easy usage on the general purpose Fedora Linux Operating System (OS). This reduces the installation/deployment of this software to a simple two-step process: install any flavour of the Fedora OS; then install the required tools using the built-in package manager.

To make common computational neuroscience tools even more accessible, NeuroFedora now provides an OS image that is ready to download and use. In addition to a plethora of computational neuroscience software - Auryn [4], NEST [5], Brian [6], NEURON [7], GENESIS [8], MOOSE [9], NeuroRD [10], and others - the image also includes various utilities that are commonly used along with modelling tools, such as the complete Python science stack. Further, since this image is derived from the popular Fedora Workstation OS, it includes the modern GNOME integrated application suite and retains access to thousands of scientific, development, utility, and other daily use tools from the Fedora repositories.

A complete list of available software can be found in the NeuroFedora documentation at neuro.fedoraproject.org. We invite students, trainees, teachers, researchers, and hobbyists to use Comp-NeuroFedora in their work and provide feedback. As a purely volunteer-driven initiative in the spirit of Open Science and FOSS, we welcome everyone to participate, engage, learn, and contribute in whatever capacity they wish.

References

1. Gleeson P, Davison AP, Silver RA, Ascoli GA. A commitment to open source in neuroscience. Neuron. 2017; 96(5): 964-5.

2. Sinha A, et al. NeuroFedora: a ready to use Free/Open Source platform for neuroscientists. BMC Neuroscience. 2019; 20(Suppl 1). Available at: https://neuro.fedoraproject.org

3. Red Hat. The Fedora Project.

4. Zenke F, Gerstner W. Limits to high-speed simulations of spiking neural networks using general-purpose computers. Frontiers in Neuroinformatics. 2014; 8.

5. Linssen C, et al. NEST 2.16.0. Jülich Supercomputing Center; 2018.

6. Goodman DF, Brette R. The Brian simulator. Frontiers in Neuroscience. 2009; 3: 26.

7. Hines ML, Carnevale NT. The NEURON simulation environment. Neural Computation. 1997; 9(6): 1179-209.

8. Bower JM, Beeman D, Hucka M. The GENESIS simulation system. 2003.

9. Dudani N, Ray S, George S, Bhalla US. Multiscale modeling and interoperability in MOOSE. BMC Neuroscience. 2009; 10(1): 1-2.

10. Jędrzejewski-Szmek Z, Blackwell KT. Asynchronous τ-leaping. The Journal of Chemical Physics. 2016; 144(12): 125104.

P81 A computational neural model of pattern motion selectivity of MT neurons

Parvin Zarei Eskikand1, David Grayden2, Tatiana Kameneva3, Anthony Burkitt2, Michael Ibbotson1

1University of Melbourne, Melbourne, Australia; 2University of Melbourne, Department of Biomedical Engineering, Melbourne, Australia; 3Swinburne University of Technology, Telecommunication Electrical Robotics and Biomedical Engineering, Melbourne, Australia

Correspondence: Parvin Zarei Eskikand (pzarei@unimelb.edu.au)

BMC Neuroscience 2020, 21(Suppl 1):P81

The middle temporal area (MT) within the extrastriate primate visual cortex contains a high proportion of direction-selective neurons. When the visual system is stimulated with plaid patterns, a range of cell-specific MT responses are observed. MT neurons that are selective to the direction of the pattern motion are called “pattern cells”, while those that respond optimally to the motion of the individual component gratings of the plaid pattern are called “component cells”. The current theory on the generation of pattern selectivity of MT neurons is based on a hierarchical relationship between component and pattern MT neurons, where the responses of pattern MT neurons result from the summation of the responses of component MT neurons [1]. Where the gratings cross in plaids, the crossing junctions of the gratings move in the pattern direction. However, revealing the ends of the moving gratings (terminators) in human perceptual experiments breaks the illusion of the direction of pattern motion: the true directions of motion of the gratings are perceived.

Here, we propose a biologically plausible model of MT neurons that uses as inputs the known properties of three types of cells in the primary visual cortex (V1): complex V1 neurons, end-stopped V1 neurons (which only respond to the end-points of the stimulus), and V1 neurons with suppressive extra-classical receptive fields. The receptive fields of the neurons are modelled as spatiotemporal filters. There are two types of MT neurons: integration MT neurons with facilitatory surrounds and segmentation MT neurons with antagonistic surrounds [2]. A neuron’s pattern or component selectivity is controlled by the relative proportions of the inputs from the three types of V1 neurons. The model provides a simple mechanism by which component and pattern selective cells can be described; the model does not require a hierarchical relationship between component and pattern MT cells.

The results show that the responses of the model MT neurons depend strongly on two parameters: the excitatory input that the model neurons receive from the complex V1 neurons with extra-classical RFs, and the inhibitory effect of the end-stopped neurons. The results also reproduce the experimentally observed contrast dependency of the pattern motion preference of MT neurons: the level of pattern selectivity of MT neurons drops significantly when the contrast of the bars is reduced.

The presented model solves several problems associated with MT motion detection, such as overcoming the aperture problem and extracting the correct motion directions from crossing bars. Apart from the mechanism of the computation of the pattern motion by MT neurons, the model inherently explains several important properties of pattern MT neurons, including their temporal dynamics, the contrast dependency of pattern selectivity, and the spatial and temporal limits of pattern motion detection.

References

  1. Kumbhani RD, El-Shamayleh Y, Movshon JA. Temporal and spatial limits of pattern motion sensitivity in macaque MT neurons. Journal of Neurophysiology. 2015; 113(7): 1977-88.

  2. Zarei Eskikand P, Kameneva T, Burkitt AN, Grayden DB, Ibbotson MR. Pattern motion processing by MT neurons. Frontiers in Neural Circuits. 2019; 13: 43.

P82 Contrast invariant tuning in primary visual cortex

Hamish Meffin1, Ali Almasi1, Michael R Ibbotson2

1National Vision Research Institute, Carlton, Australia; 2Australian College of Optometry, The National Vision Research Institute, Carlton, Australia

Correspondence: Hamish Meffin (hmeffin@yahoo.com)

BMC Neuroscience 2020, 21(Suppl 1):P82

Previous studies show that neurons in primary visual cortex (V1) exhibit contrast invariant tuning to the orientation of spatial grating stimuli [1]. Mathematically this is equivalent to saying that their response is a multiplicatively separable function of contrast and orientation.

Here we investigated the contrast dependence of V1 tuning to visual features in a more general framework. We used a data-driven modelling approach [2] to identify the spectrum of spatial features to which individual V1 neurons were sensitive, from our recordings of single-unit responses in V1 to white (Gaussian) noise and natural scenes. For each cell we identified between 1 and 5 spatial feature dimensions to which the cell was sensitive (e.g., Fig. 1A, with 2 feature dimensions, features 1 and 2 as labelled; red shows bright and blue shows dark regions of the feature). The response of a neuron to its set of features was estimated from the data as the spike rate as a function of the individual feature-contrasts: r = F(c1,…,cK) (Eq. 1), where c1,…,cK are the contrast levels of a cell's spatial features 1,…,K embedded in any stimulus (e.g., Fig. 1B). These features spanned a subspace, giving a spectrum of interpolated features to which the cell was sensitive (Fig. 1A, examples labelled). The identity of these features varied along the angular polar coordinate in this subspace, which we term the feature-phase, φ (Fig. 1A, labelled). Along this angular dimension, characteristics of the features, such as their spatial phase, orientation or spatial frequency, were found to vary continuously. Along the radial coordinate, the overall feature-contrast c varied (Fig. 1A, labelled).

We found that the neural response above the spontaneous rate, r0, was well approximated by a multiplicatively separable function of the feature-contrast and feature-phase (Fig. 1C): r = fc(c) fφ(φ) + r0 (Eq. 2).

To quantify the accuracy of this approximation, we calculated a relative error between the original and separable forms of the feature-contrast response function (i.e. Eq. (1) & (2)). This relative error varied between 2% and 18% across the cell population, with a mean of 6%. This indicates that for most cells, the separable form of the feature-contrast response function was a good approximation.
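The separability test can be sketched on synthetic data: sample a response function r(c, φ) on a polar grid, fit the best multiplicatively separable form via a rank-1 SVD of r − r0, and report a relative error as above. The contrast and phase functions below are illustrative assumptions, not the fitted cell data.

```python
import numpy as np

r0 = 2.0                                   # spontaneous rate
c = np.linspace(0.0, 1.0, 40)              # feature-contrast axis
phi = np.linspace(0.0, 2.0 * np.pi, 60)    # feature-phase axis
f_c = c**2 / (c**2 + 0.2**2)               # saturating contrast response
f_phi = 1.0 + 0.8 * np.cos(phi)            # feature-phase tuning
R = r0 + np.outer(f_c, f_phi)              # perfectly separable ground truth
R_noisy = R + 0.02 * np.random.default_rng(1).standard_normal(R.shape)

# Best separable approximation in the least-squares sense: leading SVD term.
U, S, Vt = np.linalg.svd(R_noisy - r0, full_matrices=False)
R_sep = r0 + S[0] * np.outer(U[:, 0], Vt[0])

# Relative error between the original and separable response functions.
rel_error = np.linalg.norm(R_noisy - R_sep) / np.linalg.norm(R_noisy - r0)
```

For a genuinely separable cell the residual is dominated by noise, so the relative error stays small, mirroring the 2-18% range reported across the population.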

This result may be interpreted as demonstrating a form of contrast invariant tuning to feature-phase in V1. This tuning to feature-phase is given by the function fφ(φ) (Fig. 1E), and the contrast response function is given by fc(c) (Fig. 1D). As several feature characteristics such as spatial phase, orientation or spatial frequency covary with feature-phase, this also leads to contrast invariant tuning under covariation in these characteristics as feature-phase varies.

Acknowledgements: The authors acknowledge the support of the Australian Research Council Centre of Excellence for Integrative Brain Function (CE140100007), the National Health and Medical Research Council (GNT1106390), and the Lions Club of Victoria.

Fig. 1

A Feature subspace of a cell. B Original feature-contrast response function. C Separable approximation of the feature-contrast response function. D Separable contrast response function. E Separable feature-phase tuning function

References

  1. Alitto HJ, Usrey WM. Influence of contrast on orientation and temporal frequency tuning in ferret primary visual cortex. Journal of Neurophysiology. 2004; 91(6): 2797-808.

  2. Almasi A, et al. Mechanisms of feature selectivity and invariance in primary visual cortex. bioRxiv 2020.

P83 One-shot model to learn 7 quantal parameters simultaneously, including desensitization

Emina Ibrahimovic

University of Zurich and Elvesys, Zurich, Switzerland

Correspondence: Emina Ibrahimovic (emina.ibrahimovic@imls.uzh.ch)

BMC Neuroscience 2020, 21(Suppl 1):P83

Synaptic transmission is the hallmark of informative and adaptive cognitive processes. Neurons store and retrieve information by connecting via synapses, and can modulate the transmitted signal either to maintain stability or to decrease or increase its strength. To improve the quantification of synaptic dynamics, with a focus on computational efficiency and the number of parameters, a fast variant of the binomial model is applied to estimate θ = {N, p, q, σ, τd, τf, τdes}.

To date, there is no efficient method to quantify many synaptic parameters simultaneously from physiological recordings. Generative models carry heavy computational loads, as their cost depends on the number of release sites N [1] and they require multiple iterations. Empirical methods usually require specific conditions, such as changing the release probability p or regular inter-spike intervals, which makes them unsuitable for fitting physiological data. The fast algorithm proposed here uses non-linear least squares to retrieve the quantal parameters. We apply the model to spontaneous excitatory postsynaptic potentials (EPPs) patch-clamped at an in vitro mouse neuromuscular junction (NMJ) taken from [2] (Fig. 1A). Results in the control condition are broadly in line with estimates found in vivo [3] (Fig. 1C). Quantal parameters estimated in control are then compared with data recorded under curare (Fig. 1B). The addition of curare produces a drop in quantal size q, as expected; the other parameters remain stable, suggesting no homeostatic compensatory mechanism in this case. The model is validated on synthetic data and compared with an iterative expectation-maximization technique. The advantage of this one-shot method is that it retrieves many parameters without multiple batches, and, in contrast to [1], its temporal complexity is independent of the number of release sites. The method is adaptable to general synaptic recordings, which makes it less constrained, and it is the first phenomenological model to incorporate desensitization (τdes). The framework gives insight into synaptic weights and their short- and long-term dynamics, and permits relating subjective and objective variability in healthy and pathological cases to molecular function.
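The fitting strategy can be illustrated with a much-reduced sketch (not the authors' implementation): predict a train of mean EPP amplitudes from a depression/refilling model and recover parameters with a single nonlinear least-squares fit. Only {p, q, τd} are fitted here; the full model of the abstract additionally estimates N, σ, τf and τdes from the amplitude fluctuations. All values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

N = 100          # release sites (held fixed in this sketch)
ISI = 20.0       # inter-stimulus interval (ms)
N_PULSES = 30

def predict(params):
    p, q, tau_d = params
    x, amps = 1.0, []                  # x: fraction of releasable vesicles
    for _ in range(N_PULSES):
        amps.append(N * p * q * x)     # mean EPP amplitude for this pulse
        x *= (1.0 - p)                 # depletion by release
        x = 1.0 - (1.0 - x) * np.exp(-ISI / tau_d)  # recovery towards 1
    return np.asarray(amps)

true_params = np.array([0.4, 0.13, 95.0])  # p, q (mV), tau_d (ms)
rng = np.random.default_rng(2)
data = predict(true_params) + 0.05 * rng.standard_normal(N_PULSES)

# One-shot fit: a single nonlinear least-squares call, no iterative batches.
fit = least_squares(lambda th: predict(th) - data,
                    x0=[0.2, 0.05, 50.0],
                    bounds=([0.01, 0.001, 1.0], [0.99, 1.0, 500.0]))
p_hat, q_hat, tau_hat = fit.x
```

Because the first amplitude pins down the product Npq while the depression profile constrains p and τd, the three parameters are recoverable from a single trace.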

Fig. 1

In vitro mouse NMJ EPPs with model reconstruction (blue). A Control: number of release sites N=100, release probability p=0.4, quantal size q=0.13mV, background noise σ=0.2mV, depression time constant τd=95ms, refilling time constant τf=50ms, desensitization τdes= 30ms. B Curare administration: N=100, p=0.5, q=0.05mV, σ=0.1mV, τd=45ms, τf=50ms, τdes=30ms. C Cartoon of a synaptic junction. Comparison of estimation of parameters with rat NMJ can be found in [3]

References

  1. Barri A, Wang Y, Hansel D, Mongillo G. Quantifying repetitive transmission at chemical synapses: a generative-model approach. eNeuro. 2016; 3(2).

  2. Vilmont V, Cadot B, Ouanounou G, Gomes ER. A system for studying mechanisms of neuromuscular junction development and maintenance. Development. 2016; 143(13): 2464-77.

  3. Wilson DF. Estimates of quantal-release and binomial statistical-release parameters at rat neuromuscular junction. American Journal of Physiology-Cell Physiology. 1977; 233(5): C157-63.

P84 Brainpower – investigating the role of power constraints in neural network plasticity

Silviu Ungureanu1, Mark van Rossum2

1University of Nottingham, School of Psychology, Nottingham, United Kingdom; 2University of Nottingham, School of Psychology and School of Mathematical Sciences, Nottingham, United Kingdom

Correspondence: Silviu Ungureanu (silviu.ungureanu@nottingham.ac.uk)

BMC Neuroscience 2020, 21(Suppl 1):P84

The central nervous system consumes approximately 20W of metabolic power in humans [1]. This is used for neural communication, and also for neural plasticity and the formation of new memories. Persistent forms of plasticity in particular consume so much energy that, under sudden food scarcity, associative learning significantly reduces the lifespan of fruit flies [2].

It is therefore plausible that neural plasticity has evolved to learn at minimal power. However, how this shapes plasticity and learning is not known. While previous work has considered an energy constraint [3], a power constraint may be more biological since, unlike many other tissues, the brain cannot store energy. A power constraint might explain why plasticity requires a refractory period before it can be induced again [4], as well as spatial competition in plasticity between synapses and neurons [5,6].

Here, we developed a computational model of plasticity to examine the effect of a power constraint on plasticity dynamics. We first use a standard perceptron augmented with two types of synaptic weights: an inexpensive transient, decaying component and a costly long-term component, formed by the simultaneous consolidation of all the transient weights. We further assume that the brain attempts to consolidate new memories as soon as it is able to. Hence, synaptic consolidation events occur at a fixed frequency, with the interval between them representing the refractory period caused by a dearth of energy. Higher consolidation frequencies correspond to more available power, and vice versa. The perceptron is trained on a randomly generated set of binary patterns until it correctly learns the output value for each pattern.
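A minimal sketch of this two-component perceptron, assuming illustrative sizes, decay and learning rate (not the paper's exact settings): transient weights receive the perceptron updates and leak away, while the consolidated weights only absorb them at fixed intervals set by the power constraint.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 100, 40                             # synapses, patterns
X = rng.choice([-1.0, 1.0], size=(P, N))   # binary input patterns
y = rng.choice([-1.0, 1.0], size=P)        # binary targets

def epochs_to_learn(period, decay=0.9, lr=0.1, max_epochs=500):
    w_t = np.zeros(N)                      # transient, decaying component
    w_c = np.zeros(N)                      # consolidated long-term component
    step = 0
    for epoch in range(1, max_epochs + 1):
        errors = 0
        for mu in range(P):
            step += 1
            if y[mu] * (X[mu] @ (w_t + w_c)) <= 0.0:  # misclassified
                errors += 1
                w_t += lr * y[mu] * X[mu]  # perceptron update, transient only
            w_t *= decay                   # transient weights leak away
            if step % period == 0:         # consolidation event (power-limited)
                w_c += w_t                 # commit transient weights...
                w_t[:] = 0.0               # ...and reset them
        if errors == 0:
            return epoch                   # training set fully learned
    return max_epochs

epochs_needed = epochs_to_learn(period=10)
```

Sweeping `period` in a loop reproduces the kind of training-time curve described below, with the consolidation interval acting as the inverse of available power.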

Results show that the power available to the system has a significant impact on training time. Unexpectedly, increasing the period between consolidations - thus reducing power - can reduce the required number of epochs by as much as 30%, depending on the strength of the weight decay, the number of patterns P in the training set, and the number of synapses N. Increasing the period further increases the training time. This increase occurs not gradually but in a staircase pattern, peaking whenever the period is congruent to 0 modulo P (Fig. 1).

Fig. 1

A perceptron with 500 synapses trained on 800 patterns. In blue, number of epochs necessary for the perceptron to learn the entire training set, as a function of the interval between consolidation events. In red, the number of epochs needed by a standard perceptron

The consequences of a power constraint are further explored in a multi-layer neural network and extended to a probabilistic model where the probability of consolidation increases proportional to the time since the previous consolidation event.

In summary, our results show that incorporation of a metabolic power constraint in synaptic plasticity can lead to important changes in the learning dynamics.

References

  1. Attwell D, Laughlin SB. An energy budget for signaling in the grey matter of the brain. Journal of Cerebral Blood Flow & Metabolism. 2001; 21(10): 1133-45.

  2. Mery F, Kawecki TJ. A cost of long-term memory in Drosophila. Science. 2005; 308(5725): 1148.

  3. Li HL, van Rossum MC. Energy efficient synaptic plasticity. eLife. 2020; 9: e50804.

  4. Kramár EA, et al. Synaptic evidence for the efficacy of spaced learning. Proceedings of the National Academy of Sciences. 2012; 109(13): 5121-6.

  5. Sajikumar S, Morris RG, Korte M. Competition between recently potentiated synaptic inputs reveals a winner-take-all phase of synaptic tagging and capture. Proceedings of the National Academy of Sciences. 2014; 111: 12217.

  6. Josselyn SA, Tonegawa S. Memory engrams: Recalling the past and imagining the future. Science. 2020; 367(6473): eaaw4325.

P85 The Neuron growth and death model and simulator

Yuko Ishiwaka, Tomohiro Yoshida, Tadateru Itoh

SoftBank Corp., Technology Unit, Hakodate, Japan

Correspondence: Yuko Ishiwaka (yuko.ishiwaka@g.softbank.co.jp)

BMC Neuroscience 2020, 21(Suppl 1):P85

Several characteristics of neurons depend on their morphology. A brain contains many neuron types, and the function of each cell type differs. For example, auditory cells that receive sound stimuli from the external world and pyramidal cells involved in thinking and memory serve different roles. A bushy cell, one of the sensory neurons of the auditory system, processes the timing of sounds and therefore requires immediate action potential generation and a short refractory period. Pyramidal cells, found mainly in the hippocampus and amygdala, process memory and emotion; their action potential generation is slower than that of sensory neurons and their refractory period is longer. The Hodgkin and Huxley (H-H) equations calculate action potentials from ion channel dynamics. The standard H-H model does not consider morphology; on real cell membranes, however, the number and location of ion channels depend on the shape of the neuron. How quickly a soma can produce action potentials depends on how narrow the spike initiation zone is and how many ion channels it contains. We therefore assume a strong relationship between cell shape and the characteristics of action potentials. Expanded H-H equations can capture the speed of action potential generation by adding axon hillock parameters.
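The baseline dynamics can be sketched with a standard H-H integration (squid-axon parameters, forward Euler). Here the morphology-dependent "axon hillock parameter" is mimicked by a simple multiplier on the Na+ conductance density; that multiplier is an assumption for illustration, not the authors' exact extension.

```python
import numpy as np

def hh_spike_count(g_na_scale=1.0, i_inj=10.0, t_max=50.0, dt=0.01):
    """Count upward zero-crossings of V for a constant current injection."""
    g_na, g_k, g_l = 120.0 * g_na_scale, 36.0, 0.3  # mS/cm^2
    e_na, e_k, e_l, c_m = 50.0, -77.0, -54.4, 1.0
    v, m, h, n = -65.0, 0.05, 0.6, 0.32             # resting state
    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        # Standard H-H rate functions (voltage in mV, time in ms).
        a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
        i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_inj - i_ion) / c_m
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        if v > 0.0 and not above:
            spikes += 1                              # upward threshold crossing
        above = v > 0.0
    return spikes
```

Raising `g_na_scale` stands in for a denser spike initiation zone and yields faster, more reliable firing; setting `i_inj=0.0` leaves the model at rest.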

Connectivity between neurons is also important, and geometry varies with cell type. Purkinje cells, which are inhibitory neurons, have a complex, highly branched dendritic arbor. Pyramidal cells, which are excitatory multipolar neurons, have one axon and many dendrites, but their geometry is simpler than that of Purkinje cells. These differences in geometry cause differences in connectivity.

In this paper, we propose a new neuron growth and death model and simulator that considers neuron morphology and connectivity between multiple cell types. In our model, growth cone behaviour is applied to neuron growth and treated as a navigation system, an L-system generates the geometry of each neuron, and Conway's Game of Life provides the cell division rule.
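The L-system part of the pipeline can be sketched in a few lines: each generation rewrites the string, and the brackets mark branch points when the string is later interpreted as turtle-graphics growth. The rule set below is illustrative, not the simulator's actual grammar.

```python
def expand(axiom, rules, generations):
    """Rewrite `axiom` by applying `rules` to every symbol, repeatedly."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# F: grow a segment; + / -: turn; [ and ]: start/end a side branch.
rules = {"F": "F[+F][-F]"}       # every segment sprouts two daughter branches
tree = expand("F", rules, 3)     # three generations of branching
n_branches = tree.count("[")     # each "[" opens one branch point
```

After three generations the single segment has become a binary-branching arbor with 26 branch points; richer grammars (different rules per cell type) yield Purkinje-like versus pyramidal-like geometries.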

We also incorporate glial cells into neuron growth, rather than relying on stimuli from other neurons alone. In our model, each neuron receives the energy for growth from the astrocytes (one class of glial cells) it contacts. The direction of growth of each growth cone is determined by a distant goal area, and during growth, growth cones try to contact nearby oligodendrocytes to obtain myelin around their axons. Cell division of oligodendrocytes follows Game of Life rules. The glial cells are also treated as obstacles.

In our simulation system, a user can create various types of neurons, set goals for both dendrites and axons, create connections between neurons of various functions and geometries using the growth rules, and inject inhibitory postsynaptic potentials (IPSPs) or excitatory postsynaptic potentials (EPSPs) in order to calculate action potentials.

In conclusion, Fig. 1 shows simulation results of the proposed model. In our simulator, a variety of geometries can be produced automatically with the expanded L-system, and varied, flexible connectivity can be produced with the proposed growth and death model. Furthermore, we added two types of glial cells for the growth and goal rules, also treating them as obstacles. The proposed model and simulator are flexible enough to simulate cell geometry, action potentials, and cell connections in each brain region.

Fig. 1

Simulation results of different geometries and neuron connections. The upper red neurons are simulated as Purkinje cells, the middle blue neurons as pyramidal cells, and the bottom panel shows the connections and the action potentials generated by the expanded H-H model

P86 Studying neural mechanisms in recurrent neural networks trained for multitasking depending on a context signal

Cecilia Jarne

National University of Quilmes and CONICET, Departement of Science and Technology, Quilmes, Argentina

Correspondence: Cecilia Jarne (katejarne@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P86

Most biological brains, as well as artificial neural networks, are capable of performing multiple tasks [1]. The mechanisms by which multiple tasks are performed by the same set of units are not yet entirely clear. Such systems can be modular, or mixed-selective with respect to some variable such as the sensory stimulus [2,3]. Building on simple tasks studied in our previous work [4], in which tasks consist of processing temporal stimuli, we build and analyze a simple model that can perform multiple tasks using a contextual signal. We study various properties of the trained recurrent networks, as well as the networks' response to damage to their connectivity. In this way, we aim to illuminate mechanisms similar to those that could underlie multitasking in biological brains.

References

  1. Yang GR, Joglekar MR, Song HF, Newsome WT, Wang XJ. Task representations in neural networks trained to perform many cognitive tasks. Nature Neuroscience. 2019; 22(2): 297-306.

  2. Yang GR, Cole MW, Rajan K. How to study the neural mechanisms of multiple tasks. Current Opinion in Behavioral Sciences. 2019; 29: 134-43.

  3. Rigotti M, et al. The importance of mixed selectivity in complex cognitive tasks. Nature. 2013; 497(7451): 585-90.

  4. Jarne C, Laje R. A detailed study of recurrent neural networks used to model tasks in the cerebral cortex. arXiv preprint arXiv:1906.01094. 2019.

P87 Centrality of left inferior frontal gyrus reveals important aspect of motor learning generalization

Youngjo Song, Sang Jin Jang, Pyeong-soo Kim, Byung Hyug Choi, Jaeseung Jeong

Korea Advanced Institute of Science and Technology, Bio and Brain Engineering, Daejeon, South Korea

Correspondence: Youngjo Song (syj1455@kaist.ac.kr)

BMC Neuroscience 2020, 21(Suppl 1):P87

Generalization of learning refers to the phenomenon in which knowledge learned in one context enhances performance in another. Although the learning environment is never precisely the same in the real world, animals, including humans, show excellent flexibility in adapting learned skills to new environments. Despite the ubiquity of generalization in daily life, little is understood about how the brain generalizes skills and knowledge across environments. In particular, most previous studies have focused only on identifying the cerebral networks engaged during the generalization stage, and have not determined which elements of the preceding learning stage enable generalization. The aim of this study was therefore to enhance understanding of the neural mechanisms that enable generalization, particularly the generalization of motor learning. We designed a new experimental paradigm called the 'mirror-erasing generalization task.' In an MRI scanner, subjects erased (1) a simple shape (a square) during the training session and (2) a complex shape (the cursive letter y) during the generalization sessions, which took place both before and after training. We found that subjects successfully generalized the motor skills acquired during square-erasing to the letter-erasing context (p < 0.0001). Counterintuitively, however, skill improvement during training did not correlate with the generalization of motor skills (p ~ 0.1). This implies that the dynamics underlying generalization are possibly nonlinear, and that performance enhancement in one context is not a reliable measure for estimating generalization performance in another. We then computationally modeled the neuronal circuitry responsible for motor learning generalization, using the fMRI data to construct a functional network model (based on frequencies between 0.049 Hz and 0.09 Hz).
More interestingly, we found that the betweenness centrality of the pars opercularis of the left inferior frontal gyrus (IFG) correlated significantly with generalization performance (R ~ 0.8, p < 0.05, FWE corrected); it was the only pre-generalization measure that did so. Notably, the pars opercularis of the left IFG has been considered part of the mirror neuron system, which is hypothesized to facilitate motor abstraction. This finding suggests that IFG-mediated abstraction of the new motor skill acquired during training may be key to generalizing learned skills to a different context, providing potential evidence for the contemporary view that abstraction plays an essential role in generalization. Furthermore, we suggest that the centrality of the pars opercularis of the left IFG could be used to predict future generalization performance, which opens new possibilities in motor rehabilitation: measuring the brain functional networks of patients undergoing rehabilitation could potentially predict how much their motor function will improve in real life.
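The network step can be sketched as follows: band-limited BOLD correlations between regions are thresholded into a binary graph, and betweenness centrality is read out per node (e.g., for the left IFG pars opercularis). The time series, node count and threshold below are illustrative, not the study's data.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
ts = rng.standard_normal((6, 200))   # 6 regions x 200 filtered time points
ts[1] += 0.5 * ts[0]                 # inject some shared signal between regions
corr = np.corrcoef(ts)               # region-by-region correlation matrix

threshold = 0.2                      # illustrative edge threshold
adj = (np.abs(corr) > threshold) & ~np.eye(6, dtype=bool)
G = nx.from_numpy_array(adj.astype(int))

# Betweenness centrality: fraction of shortest paths passing through each node.
bc = nx.betweenness_centrality(G)
```

In the study's setting, `bc` at the left-IFG node is the pre-generalization quantity correlated against later generalization performance.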

Acknowledgements: This study was conducted as part of Global Singularity Research Program for 2020 financially supported by KAIST.

P88 Phosphorylation-induced variation of NF kinetics and morphology of axon

Zelin Jia, Yinyun Li

Beijing Normal University, School of System Science, Beijing, China

Correspondence: Zelin Jia (zljia@mail.bnu.edu.cn)

BMC Neuroscience 2020, 21(Suppl 1):P88

Neurofilaments (NFs) are transported along microtubule tracks in axons and are gradually phosphorylated in the process [1]. At the nodes of Ranvier, the axon is not encased by a myelin sheath. Phosphorylation of NFs has been shown to reduce their transport rate, and therefore to influence the formation of axon morphology [2]. In the theory of slow axonal transport, the "stop-and-go" model [3] describes well the random kinetic behavior of NFs in axons. On the basis of the "stop-and-go" model, we introduce the conversion between phosphorylated and dephosphorylated neurofilaments and build an "eight-state" model. We assume that phosphorylated and dephosphorylated NFs have different "on-track" rates (γon), so as to capture the effect of phosphorylation on transport [4,5]. Through theoretical derivation and simulation, we conclude that both the modification of the "on-track" rate and the conversion between phosphorylation and dephosphorylation slow down NF transport along the axon. This conclusion is consistent with the continuity equation: at equilibrium, the flux, given by the product of the number of NFs and their average velocity, is constant along the axon.
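A reduced two-state Monte Carlo sketch captures the stop-and-go intuition (the full eight-state model is not reproduced here, and all rates are illustrative): a filament only advances while on track, and modelling phosphorylation as a lower on-track rate γon lowers the duty cycle and hence the time-averaged velocity.

```python
import numpy as np

def mean_velocity(gamma_on, gamma_off=0.5, v_on=1.0, steps=20000, dt=0.01):
    """Average transport velocity of one NF switching on/off track."""
    rng = np.random.default_rng(5)
    on, x = False, 0.0
    for _ in range(steps):
        if on:
            x += v_on * dt                       # advance only while on track
            if rng.random() < gamma_off * dt:
                on = False                       # fall off the track
        elif rng.random() < gamma_on * dt:
            on = True                            # re-engage the track
    return x / (steps * dt)

v_dephos = mean_velocity(gamma_on=0.5)  # dephosphorylated: duty cycle ~ 0.5
v_phos = mean_velocity(gamma_on=0.1)    # phosphorylated: duty cycle ~ 0.17
```

The time-averaged velocity approaches v_on · γon/(γon + γoff), so reducing γon for the phosphorylated state slows transport, consistent with the flux argument above.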

References

  1. Nixon RA, Paskevich PA, Sihag RK, Thayer CY. Phosphorylation on carboxyl terminus domains of neurofilament proteins in retinal ganglion cell neurons in vivo: influences on regional neurofilament accumulation, interneurofilament spacing, and axon caliber. The Journal of Cell Biology. 1994; 126(4): 1031-46.

  2. Jung C, Yabe JT, Shea TB. C-terminal phosphorylation of the high molecular weight neurofilament subunit correlates with decreased neurofilament axonal transport velocity. Brain Research. 2000; 856(1-2): 12-9.

  3. Wang L, Brown A. Rapid intermittent movement of axonal neurofilaments observed by fluorescence photobleaching. Molecular Biology of the Cell. 2001; 12(10): 3257-67.

  4. Yabe JT, Pimenta A, Shea TB. Kinesin-mediated transport of neurofilament protein oligomers in growing axons. Journal of Cell Science. 1999; 112(21): 3799-814.

  5. Yabe JT, Jung C, Chan WK, Shea TB. Phospho-dependent association of neurofilament proteins with kinesin in situ. Cell Motility and the Cytoskeleton. 2000; 45(4): 249-62.

P89 Quantification of changes in motor imagery skill during brain-computer interface use

James Bennett, David Grayden, Anthony Burkitt, Sam John

University of Melbourne, Department of Biomedical Engineering, Melbourne, Australia

Correspondence: James Bennett (bennettj1@student.unimelb.edu.au)

BMC Neuroscience 2020, 21(Suppl 1):P89

Oscillatory activity over the sensorimotor cortex, known as sensorimotor rhythms (SMRs), can be modulated by the kinaesthetic imagination of limb movement [1]. These event-related spectral perturbations can be observed in electroencephalography (EEG), offering a potential way to restore communication and control to people with severe neuromuscular conditions via a brain-computer interface (BCI). However, the ability of individuals to produce these modulations varies greatly across the population: between 10% and 30% of people are unable to modulate their SMRs sufficiently for the modulations to be distinguishable by a BCI decoder [2]. Despite this, it has been shown that users can be trained to improve the extent of their SMR modulations. This research utilised a data-driven approach to characterise the skill development of participants undertaking a left- and right-hand motor imagery experiment.

Two publicly available motor imagery EEG datasets were analysed. Dataset 1 consisted of EEG data from 47 participants performing 200 trials of left- and right-hand motor imagery within a single session [3]. No real-time visual feedback was provided to the participants. Dataset 2 contained EEG from two sessions of 200 trials each from 54 participants [4]. Visual feedback was provided to users in the second session but not in the first. Various metrics characterising mental imagery skill were calculated across time for each participant.

The discriminability of EEG in the 8-30 Hz range from left- and right-hand trials was found to increase across time for both datasets. Despite the overall improvement, there was great variability in the change of motor imagery skill across participants. For Dataset 1, the average change across time of the metric representing the discriminability of classes was 6.0±21.9%. For Sessions 1 and 2 of Dataset 2, the discriminability increased by 11.8±44.0% and 17.4±30.7%, respectively. Session 2 of Dataset 2 contained visual feedback and produced a larger overall improvement in motor imagery skill with a lower variability compared with Session 1.
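One simple discriminability metric of the kind used here can be sketched on synthetic stand-in data (the abstract does not specify its exact metric, so this Fisher-score formulation is an assumption): for each block of trials, compute log band-power features for left versus right imagery and score their class separability.

```python
import numpy as np

rng = np.random.default_rng(6)

def fisher_score(a, b):
    """Univariate class separability: between-class over within-class variance."""
    return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)

def block_scores(n_blocks=4, trials=50, drift=0.3):
    """Discriminability per block, with class separation growing over practice."""
    scores = []
    for k in range(n_blocks):
        sep = 1.0 + drift * k                     # separability grows with time
        left = rng.normal(0.0, 1.0, trials)       # log band-power, left trials
        right = rng.normal(sep, 1.0, trials)      # log band-power, right trials
        scores.append(fisher_score(left, right))
    return scores

scores = block_scores()
```

Tracking such a score across trial blocks, per participant, gives the kind of skill-change trajectory summarised by the percentage figures above.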

In this work, we investigated the level of motor imagery skill acquisition during BCI use. The results indicate a baseline level of skill improvement that can be expected, and also emphasise the large variability across participants commonly seen in BCI studies. Overall, we provide a useful reference of BCI skill acquisition for future research that seeks to increase the rate of skill improvement and decrease the amount of variability.

References

  1. Pfurtscheller G, Da Silva FL. Event-related EEG/MEG synchronization and desynchronization: basic principles. Clinical Neurophysiology. 1999; 110(11): 1842-57.

  2. Allison BZ, Neuper C. Could anyone use a BCI? In: Brain-Computer Interfaces. Springer, London; 2010. pp. 35-54.

  3. Cho H, Ahn M, Ahn S, Kwon M, Jun SC. EEG datasets for motor imagery brain-computer interface. GigaScience. 2017; 6(7): gix034.

  4. Lee MH, et al. EEG dataset and OpenBMI toolbox for three BCI paradigms: an investigation into BCI illiteracy. GigaScience. 2019; 8(5): giz002.

P90 Texture-like neural representations

Emil Dmitruk1, Ritesh Kumar1, Volker Steuber2, Michael Schmuker2, Carina Curto3, Vladimir Itskov3, Shabnam Kadir4

1University of Hertfordshire, School of Engineering and Computer Science, Hatfield, United Kingdom; 2University of Hertfordshire, Biocomputation Research Group, Hatfield, United Kingdom; 3Pennsylvania State University, Department of Mathematics, State College, Pennsylvania, United States of America; 4University of Hertfordshire, Centre for Computer Science and Informatics Research, Hatfield, United Kingdom

Correspondence: Emil Dmitruk (e.dmitruk@herts.ac.uk)

BMC Neuroscience 2020, 21(Suppl 1):P90

An almost ubiquitous approach in systems neuroscience is to devise stimulus-response functions (SRFs), which specify how a stimulus is encoded into a neural response, e.g., place fields or tuning curves [1]. However, the brain itself is clearly able to infer properties of the environment in real time from neural activity alone, without performing experiments on its own responses to stimuli. A central aim of this work is to explore the relationship between the geometric/topological structure of the stimulus space (and of animal behaviour) and neural activity, and to investigate to what extent the former can be derived from the latter without constructing a dictionary between the two, as an SRF provides. In the last decade, successful attempts have been made to eliminate this admittedly very useful middleman using topological data analysis (TDA) [2,3].

We build on an approach initiated in [3], which uses clique topology, a form of TDA. Statistics of neural activity and connectivity derived from experimental data are often presented as a matrix of correlations or connectivity strengths between pairs of neurons, voxels, etc. Analogous statistics can be obtained from the stimuli presented to the animal, e.g., visual and auditory textures, images of natural scenes, or olfactory stimuli. Clique topology enables us to test whether signatures of the structure of stimulus spaces and environments are detectable in the correlation structure of raw neuronal recordings. The advantage of clique topology over traditional eigenvalue-based methods is that the latter are badly distorted by monotone nonlinearities, whereas the information encoded in the 'order complex' is invariant under such transformations.
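The first step of the clique-topology pipeline, and its invariance property, can be sketched without a TDA package: keep only the order of the pairwise correlations, add edges from strongest to weakest, and track Betti-0 (connected components) along the filtration. Higher Betti numbers would require a persistent-homology library; this numpy-only sketch stops at components.

```python
import numpy as np

def betti0_curve(corr):
    """Number of connected components as edges enter in decreasing order."""
    n = corr.shape[0]
    iu = np.triu_indices(n, k=1)
    order = np.argsort(-corr[iu])          # edge ranks: only the ORDER matters
    parent = list(range(n))
    def find(i):                           # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    curve = []
    for e in order:
        a, b = iu[0][e], iu[1][e]
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb                # merge the two components
        curve.append(len({find(i) for i in range(n)}))
    return curve

rng = np.random.default_rng(7)
A = rng.standard_normal((8, 8))
corr = (A + A.T) / 2.0                     # symmetric "correlation-like" matrix

curve = betti0_curve(corr)
# A monotone transform of the entries leaves the order complex unchanged:
curve2 = betti0_curve(np.tanh(corr))
```

Because tanh is strictly monotone, the edge ordering, and hence the whole Betti curve, is identical for `corr` and `tanh(corr)`, which is exactly the robustness that eigenvalue-based methods lack.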

The statistical topological approach in [3] could determine whether correlations arising on either the stimulus or the response side were random or induced by a geometric process (e.g., pairwise distances obtained by sampling points from a unit cube in Rd). Here we introduce a new regime of complexes derived instead from textures. In [3], experimental scenarios were considered in which neurons were tuned to features lying in a continuous coding space where correlations decrease with distance, e.g., hippocampal place cells. Textures are in many ways the antithesis of this, exhibiting both repetitive and random features. Applying clique topology to textures has led us to a menagerie of order complexes with very small Betti numbers throughout the filtration compared to same-sized 'random' and 'geometric' order complexes (Fig. 1). Analogous Betti curves have been shown to be induced by order complexes derived from low-rank matrices by Curto (unpublished). The matrices that emerge from textures, however, do not generally have low rank, and it is an open question whether the two can be related.

Fig. 1

A An example of matrix ordering (values ordered according to the scale). B Derivation of an order complex from a visual texture using Gabor filters. C Order matrices and Betti curves for textures (bricks, water) and an olfactory dataset; images of the textures are shown as insets. D Order complexes (from A) as simplicial complexes. E Betti curves for random and geometric matrices

We were surprised to find that datasets from a wider range of modalities appear to exhibit texture-like rather than ‘geometric’ structure. We have extracted texture-like order complexes from olfactory datasets [4] as well as a simulated dataset from a spiking neural network modelling speech recognition.

References

1. Meyer AF, Williamson RS, Linden JF, Sahani M. Models of neuronal stimulus-response functions: elaboration, estimation, and evaluation. Frontiers in Systems Neuroscience. 2017; 10: 109.

2. Curto C, Itskov V. Cell groups reveal structure of stimulus space. PLoS Computational Biology. 2008; 4(10): e1000205.

3. Giusti C, Pastalkova E, Curto C, Itskov V. Clique topology reveals intrinsic geometric structure in neural correlations. Proceedings of the National Academy of Sciences. 2015; 112(44): 13455-60.

4. Si G, et al. Structured odorant response patterns across a complete olfactory receptor neuron population. Neuron. 2019; 101(5): 950-62.

P91 A unified framework for the application and evaluation of different methods for neural parameter optimization

Máté Mohácsi1, Sára Sáray1, Márk P Török2, Szabolcs Kali2

1Pázmány Péter Catholic University, Faculty of Information Technology and Bionics, Budapest, Hungary; 2Institute of Experimental Medicine, Hungarian Academy of Sciences, Budapest, Hungary

Correspondence: Szabolcs Kali (kali@koki.hu)

BMC Neuroscience 2020, 21(Suppl 1):P91

Currently available experimental data make it possible to create complex multicompartmental conductance-based models of neurons. In principle, such models can approximate the behavior of real neurons very well. However, these models have many parameters and some of these parameters often cannot be directly determined in experiments. Therefore, a common approach is to tune parameter values to bring the physiological behavior of the model as close as possible to the experimental data. Rather than tuning the parameters by hand, a more principled way of determining good model parameters is to carry out a systematic parameter search using an appropriate global optimization algorithm. Although many such algorithms have been developed and applied successfully in various domains, and high-quality general implementations of many popular algorithms are available, the majority of these solutions have not been tested in a neural context. Our goal in this study was to create a software tool that provides uniform access to a large variety of different optimization algorithms; to develop a set of benchmark problems for neural parameter tuning; and to systematically evaluate and compare the various algorithms and implementations using our software and benchmarking suite.

We have created an updated and enhanced version of Optimizer, our previously developed software tool. In Optimizer, model evaluations can be performed either by the NEURON simulator (handled internally) or by any external (black-box) simulator. All functionalities can be accessed from the graphical user interface; there is also a command line interface for batch processing. The new version was developed in Python 3 to support recent open-source Python modules. The repertoire of algorithms was extended by several new methods that proved effective in other studies. For many of these search algorithms, parallel optimization is also supported. A wide variety of features (including those in the eFEL package) can be used to evaluate the error of the optimization; multiple, weighted features are also supported. Our optimization tool currently supports about fifteen different optimization algorithms implemented by four separate Python packages: Inspyred, Pygmo, BluePyOpt, and Scipy.

Our neural optimization benchmark suite includes six separate problems that differ in complexity, model type, simulation protocol, fitness functions, and the number of unknown parameters. Our examples range from the classical Hodgkin-Huxley model (3 conductance parameters) to an extended integrate-and-fire model (10 parameters) and a morphologically and biophysically detailed hippocampal pyramidal cell (16 parameters). Some of our benchmarks use target data generated by a neuronal model with known parameters. However, in most of our benchmarks, the target data were recorded in physiological experiments, or were generated by more complex models than the one we were fitting.
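
The benchmarking idea of fitting a model to target data generated with known parameters can be sketched in a few lines. The example below is not any of the abstract's benchmarks or algorithms: it uses a toy damped-oscillation "response" with two made-up parameters and a minimal elitist evolutionary search, purely to illustrate how an optimizer is scored by the final error against ground truth.

```python
import math
import random

def model(params, t):
    """Toy 'neuron' response: a damped oscillation standing in for a
    membrane-potential trace; a, b are the unknown parameters."""
    a, b = params
    return math.exp(-a * t) * math.cos(b * t)

def error(params, target, ts):
    # Sum-of-squares fitness, analogous to a feature-based error score
    return sum((model(params, t) - y) ** 2 for t, y in zip(ts, target))

def evolve(target, ts, pop_size=30, gens=60, seed=1):
    """Minimal elitist evolutionary search over the 2-D parameter space:
    keep the best half each generation and mutate it with Gaussian noise."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 2), rng.uniform(0, 10)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: error(p, target, ts))
        parents = pop[: pop_size // 2]
        children = [
            (max(0.0, p[0] + rng.gauss(0, 0.05)),
             max(0.0, p[1] + rng.gauss(0, 0.2)))
            for p in parents
        ]
        pop = parents + children
    return min(pop, key=lambda p: error(p, target, ts))

# Target data generated by the model itself with known parameters,
# mirroring the ground-truth benchmarks described above.
ts = [0.05 * i for i in range(100)]
true_params = (0.5, 4.0)
target = [model(true_params, t) for t in ts]
best = evolve(target, ts)
```

Scoring by final error after a fixed evaluation budget, as in Fig. 1, would simply mean stopping `evolve` once `pop_size * gens` evaluations have been spent.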

We then tested the various algorithms on the different model optimization tasks, and compared the final error (after 10,000 model evaluations) and also the convergence speed (Fig. 1). We found that several evolutionary and related search algorithms delivered consistently good results across our entire test suite, even for higher-dimensional, multi-objective problems. Therefore, we would recommend trying these algorithms first for novel optimization problems. We also hope to extend our test suite with new problems and algorithms.

Fig. 1

Rankings of the algorithms based on their combined performance on our multi-objective test case. Top: Ranking based on the final error after 10,000 model evaluations. Bottom: Ranking based on the area under the error curve (convergence speed). Lower scores indicate better performance. Red bars are for multi-objective algorithms, blue bars for single-objective algorithms

P92 Modelling the responses of ON and OFF retinal ganglion cells to infrared neural stimulation

James Begeng1, Wei Tong2, Michael R Ibbotson2, Paul Stoddart3, Tatiana Kameneva4

1Swinburne University of Technology, Faculty of Science, Engineering and Technology, Melbourne, Australia; 2Australian College of Optometry, The National Vision Research Institute, Melbourne, Australia; 3Swinburne University of Technology, ARC Training Centre in Biodevices, Melbourne, Australia; 4Swinburne University of Technology, Telecommunication Electrical Robotics and Biomedical Engineering, Melbourne, Australia

Correspondence: James Begeng (jbegeng@swin.edu.au)

BMC Neuroscience 2020, 21(Suppl 1):P92

Retinal degenerative diseases such as retinitis pigmentosa and age-related macular degeneration cause progressive photoreceptor loss leading to partial or total patient blindness. Retinal prostheses attempt to circumvent this loss of photoreceptors by direct stimulation of the underlying retinal ganglion cell (RGC) circuitry, and are capable of restoring limited visual sensation to blind patients. Because these devices typically inject current through implanted electrode arrays, their spatial resolution is significantly limited, and their capacity for selective stimulation of distinct RGC types has not yet been established. In particular, selective stimulation of ON and OFF RGCs (which exhibit opposite light responses in vivo) constitutes a long-standing open problem in retinal prosthesis design.

Infrared neural modulation (INM) uses pulsed infrared light to deliver sharp thermal transients to neural tissue, and is capable of both neural stimulation and inhibition with high spatial precision. This technique relies on at least two distinct mechanisms: a temperature-gradient-dependent capacitive current, and thermosensitive activation of TRPV ion channels. For retinal prostheses, this high stimulus resolution offers an attractive alternative to the low resolution of current electrical prostheses; however, it is unclear how infrared-evoked currents may vary between the wide variety of RGC types in mammalian retina, or whether these differences may be harnessed for selective stimulation.

In this study, a single-compartment Hodgkin-Huxley-type model was simulated in the NEURON environment. The model included leak, sodium, potassium, calcium, and low-voltage-activated calcium currents based on published data [1,2]. Thermally evoked currents were simulated by a dT/dt-dependent capacitive current based on the Gouy-Chapman-Stern (GCS) theory of bilayer capacitance [3].
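
The essence of the dT/dt-dependent mechanism can be sketched with a leak-only compartment. All constants below (capacitance sensitivity `dCdT`, pulse timing, heating rate) are illustrative placeholders, not values from the study or from GCS theory: the point is only that a transient of heating followed by cooling produces a depolarisation and then a rebound hyperpolarisation, the qualitative signature discussed in the results.

```python
# Illustrative constants (not fitted to any experiment); times in ms
C0 = 1.0          # baseline membrane capacitance
dCdT = 0.003      # assumed capacitance change per degree (placeholder)
gL, EL = 0.3, -70.0   # leak conductance and reversal potential (mV)

def simulate(pulse_dTdt, dt=0.01, t_end=20.0):
    """Leak-only compartment driven by a dT/dt-dependent capacitive
    current I_cap = -V * (dC/dT) * (dT/dt), a toy stand-in for the
    GCS-based mechanism cited in the abstract."""
    V, trace = EL, []
    for k in range(int(t_end / dt)):
        t = k * dt
        I_cap = -V * dCdT * pulse_dTdt(t)   # depolarizing while heating
        dV = (-gL * (V - EL) + I_cap) / C0
        V += dt * dV
        trace.append(V)
    return trace

def heat_pulse(t, on=5.0, off=6.0, rate=100.0):
    """Millisecond-scale heat transient: dT/dt positive during the
    pulse, negative just after it (illustrative units)."""
    if on <= t < off:
        return rate
    if off <= t < off + 1.0:
        return -rate
    return 0.0

trace = simulate(heat_pulse)
```

With these placeholder numbers the trace depolarises by roughly 15 mV during the pulse, undershoots below rest during cooling, and then relaxes back to the leak reversal.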

Our results show that INM responses differ between ON and OFF RGCs. In particular, OFF cells show a prolonged depolarisation in response to millisecond-timescale heat pulses, whilst ON cells exhibit a brief depolarisation followed by a larger post-pulse hyperpolarisation. This difference is mainly due to the low-voltage-activated calcium current, which is present in OFF and absent in ON RGCs. This prediction is yet to be confirmed experimentally, but may have important implications for the development of infrared retinal prostheses.

References

1. Fohlmeister JF, Miller RF. Impulse encoding mechanisms of ganglion cells in the tiger salamander retina. Journal of Neurophysiology. 1997; 78(4): 1935-47.

2. Wang XJ, Rinzel J, Rogawski MA. A model of the T-type calcium current and the low-threshold spike in thalamic neurons. Journal of Neurophysiology. 1991; 66(3): 839-50.

3. Eom K, Byun KM, Jun SB, Kim SJ, Lee J. Theoretical study on gold-nanorod-enhanced near-infrared neural stimulation. Biophysical Journal. 2018; 115(8): 1481-97.

P93 Investigation of Stimulation Protocols in Transcutaneous Vagus Nerve Stimulation (tVNS)

Charlotte Keatch1, Paul Stoddart2, Elisabeth Lambert3, Will Woods4, Tatiana Kameneva5

1Swinburne University of Technology, Biomedical Engineering, Melbourne, Australia; 2Swinburne University of Technology, ARC Training Centre in Biodevices, Melbourne, Australia; 3Swinburne University of Technology, Department of Health and Medical Sciences, Melbourne, Australia; 4Swinburne University of Technology, Faculty of Health, Arts and Design, Melbourne, Australia; 5Swinburne University of Technology, Telecommunication Electrical Robotics and Biomedical Engineering, Melbourne, Australia

Correspondence: Charlotte Keatch (ckeatch@swin.edu.au)

BMC Neuroscience 2020, 21(Suppl 1):P93

Transcutaneous vagus nerve stimulation (tVNS) is a type of non-invasive brain stimulation that is increasingly used in the treatment of a number of health conditions, such as epilepsy and depression. Although there is a great deal of research into the medical conditions that can be improved by tVNS, there is little conclusive evidence regarding the optimal stimulation parameters, such as stimulation frequency, pulse type, or amplitude. Understanding whether variation of these stimulation parameters can directly influence the brain response could improve treatment delivery.

The aim of this project is to determine whether varying the stimulation parameters of tVNS can influence the induced brain response, and if there is an optimal set of stimulation parameters that can be determined for targeted treatment of different medical conditions.

Twenty healthy participants were selected for their suitability for both magnetoencephalography (MEG) and magnetic resonance imaging (MRI) according to predetermined exclusion criteria. The experimental sessions were carried out at the Swinburne Imaging Facility, Swinburne University of Technology. Four different stimulation protocols were delivered via electrical stimulation to the left ear: active stimulation of the cymba concha with regular pulses at 24 Hz, sham stimulation of the ear lobe with regular pulses at 24 Hz, stimulation of the cymba concha with regular pulses at 1 Hz, and stimulation of the cymba concha with 24 Hz pulse frequency modulated (PFM) pulses (modulated at 6 Hz).
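
The abstract does not specify the exact PFM scheme, so the sketch below shows one plausible reading for illustration only: pulse onsets generated by integrating an instantaneous pulse rate, with the regular trains using a constant 24 Hz rate and the PFM train a rate sinusoidally modulated at 6 Hz around the same mean.

```python
import math

def pulse_times(duration, f_carrier=24.0, f_mod=None, depth=0.5):
    """Generate pulse onset times (s) by integrating an instantaneous
    pulse rate. f_mod=None gives a regular train at f_carrier; otherwise
    the rate is sinusoidally modulated at f_mod Hz. One plausible PFM
    scheme, not necessarily the one used in the study."""
    times, phase, t, dt = [], 0.0, 0.0, 1e-4
    while t < duration:
        rate = f_carrier
        if f_mod is not None:
            rate *= 1.0 + depth * math.sin(2 * math.pi * f_mod * t)
        phase += rate * dt
        if phase >= 1.0:        # emit a pulse each time a full cycle accrues
            times.append(t)
            phase -= 1.0
        t += dt
    return times

regular = pulse_times(1.0)           # ~24 evenly spaced pulses per second
pfm = pulse_times(1.0, f_mod=6.0)    # same mean rate, intervals vary at 6 Hz
```

Both trains deliver the same average pulse count, so any differential brain response isolates the effect of the 6 Hz modulation rather than the dose.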

Participants' brain dynamics in response to stimulation were analysed using several signal processing techniques. First, the raw data were passed through the MaxFilter software, which uses Signal Space Separation (SSS), based on Maxwell's equations, to remove major sources of noise and artifacts. The stimulation artifact was then removed from the data by spline interpolation, which excises part of the data from the onset of each stimulation pulse and interpolates to reconstruct the signal. The data were then downsampled and filtered before applying Fast Fourier Transforms (FFT) to obtain power spectra at the sensor level. The responses to different protocols were contrasted by taking ratios for all participants, which were then averaged to examine the group response at the sensor level.
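
The artifact-removal and spectral steps can be sketched on synthetic data. Everything below is illustrative (made-up sampling rate, a 10 Hz "brain" oscillation, artificial 24 Hz artifact transients), and plain linear interpolation stands in for the spline interpolation used in the study:

```python
import numpy as np

fs = 1000.0                      # sampling rate, Hz (illustrative)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic channel: 10 Hz oscillation + noise + sharp stimulation
# artifacts at 24 Hz pulse onsets.
signal = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
onsets = np.arange(0.05, 2.0, 1 / 24)
for on in onsets:
    i = int(on * fs)
    signal[i : i + 5] += 8.0     # 5-sample artifact transient

def remove_artifacts(x, onsets, fs, width=5):
    """Blank a short window from each pulse onset and reconstruct it by
    interpolation (linear here; the study used spline interpolation)."""
    x = x.copy()
    for on in onsets:
        i = int(on * fs)
        j = min(i + width, x.size - 1)
        x[i:j] = np.interp(np.arange(i, j), [i - 1, j], [x[i - 1], x[j]])
    return x

clean = remove_artifacts(signal, onsets, fs)

# Sensor-level power spectrum via FFT
freqs = np.fft.rfftfreq(clean.size, 1 / fs)
power = np.abs(np.fft.rfft(clean)) ** 2 / clean.size
```

After cleaning, the spectrum is dominated by the underlying 10 Hz component rather than the broadband artifact energy; ratios of such spectra between protocols give the sensor-level contrasts described above.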

Preliminary results show that responses vary between individuals, with different brain areas activated by the stimulation. Comparison of active stimulation of the vagus nerve at 24 Hz with sham stimulation of the ear lobe shows a dipole response to the active stimulation in the parietal lobe. Comparison of the PFM stimulation with the regular 24 Hz stimulation of the vagus nerve shows an inhibited response at the modulation frequency of 6 Hz in comparison with other frequency bands. Finally, comparing the 1 Hz with the 24 Hz active stimulation of the vagus nerve shows that the 1 Hz stimulation drives the brain more strongly across all frequency bands. These preliminary results may serve as a stepping-stone for investigating the effect of tVNS on brain dynamics and for designing stimulation protocols that may have therapeutic effects.

P94 Brain rhythms enhance top-down influence on V1 responses in perceptual learning

Ryo Tani1, Yoshiki Kashimori2

1The University of Electro-Communications, Tokyo, Japan; 2The University of Electro-Communications, Department of Engineering Science, Tokyo, Japan

Correspondence: Ryo Tani (r.tani@uec.ac.jp)

BMC Neuroscience 2020, 21(Suppl 1):P94

Visual information is conveyed in a feedforward manner to progressively higher levels in the hierarchy, beginning with the analysis of simple attributes, such as orientation and contrast, and leading to more complex object features from one stage to the next. At the same time, visual systems have abundant feedback connections, which even outnumber the feedforward ones. Top-down influences, conveyed by the feedback pathways across brain areas, modulate the responses of neurons in early visual areas, depending on cognition and behavioral context. Li et al. [1] showed that top-down signals allowed neurons of the primary visual cortex (V1) to engage stimulus components that were relevant to a perceptual task and to discard influences from components that were irrelevant to the task. They showed that V1 neurons exhibited characteristic tuning patterns depending on the array of stimulus components. Ramalingam et al. [2] further examined dynamic aspects of V1 neurons in the tasks used by Li et al., and revealed differences in the dynamic correlations between V1 responses evoked by the two tasks. Using a V1 model, we also proposed a neural mechanism for the tuning modulations induced by top-down signals [3].

Top-down and bottom-up information are processed with different brain rhythms. Fast oscillations such as gamma rhythms are involved in sensory coding and feature binding in local circuits, while slower oscillations such as alpha and beta rhythms are evoked in higher brain areas and may contribute to the coupling of distinct brain areas. In this study, we investigate how top-down influence is conveyed by the feedback pathway, and how information relevant to task context is coordinated by different brain oscillations. We present a model of the visual system which consists of networks of V1 and V2. We consider the two types of perceptual tasks used by Li et al., the bisection task and the vernier task.
We show that visual information relevant to each task context is coordinated by a push-pull effect of the top-down signal. We also show that a top-down signal reflecting a beta oscillation in V2 neurons, coupled with a gamma oscillation of V1 neurons, enables the efficient gating of task-relevant information in V1. This study provides useful insight into how rhythmic oscillations in distinct brain areas are coupled to gate task-relevant information encoded in early sensory areas.

References

1. Li W, Piech V, Gilbert CD. Perceptual learning and top-down influences in primary visual cortex. Nature Neuroscience. 2004; 7(6): 651-657.

2. Ramalingam N, McManus JNJ, Li W, Gilbert CD. Top-down modulation of lateral interactions in visual cortex. Journal of Neuroscience. 2013; 33(5): 1773-1789.

3. Kamiyama A, Fujita K, Kashimori Y. A neural mechanism of dynamic gating of task-relevant information by top-down influence in primary visual cortex. BioSystems. 2016; 150: 138-148.

P95 A functional role of short-term synapses in maintenance of gustatory working memory in orbitofrontal cortex

Layla Antaket1, Yoshiki Kashimori2

1University of Electro-Communications, Tokyo, Japan; 2The University of Electro-Communications, Department of Engineering Science, Tokyo, Japan

Correspondence: Layla Antaket (a1943002@edu.cc.uec.ac.jp)

BMC Neuroscience 2020, 21(Suppl 1):P95

Taste perception is an important function for life activities, such as the ingestion of nutrients and the avoidance of toxic foods. Gustatory information is first processed by taste receptors in the taste buds of the tongue, and is then transmitted to the orbitofrontal cortex (OFC), the hypothalamus, and the amygdala. Along this pathway, the gustatory cortex (GC) processes information about the quality and intensity (concentration) of taste itself. Current taste research is dominated by electrophysiological and molecular biological studies of receptors; however, the mechanisms by which taste information is processed at each stage of the gustatory pathway are not well understood.

Furthermore, in addition to the higher-order processing of taste information, the OFC, located above the GC, integrates taste information with other sensory information, such as touch, smell, and color, to determine the flavor of food and guide behavior. We previously proposed a binding mechanism for taste and odor information in the OFC [1]. A recent study revealed an additional function of the OFC: working memory for taste information [2]. That study showed that OFC neurons of rhesus monkeys encoded a gustatory working memory in a delayed match-to-sample task. OFC neurons exhibited persistent activity even after a gustatory stimulus presented in the sample period was turned off, whereas neurons of the primary gustatory cortex (GC) did not show significant persistence of activity. It remains unclear how gustatory working memory in the OFC is shaped by the interaction between the GC and the OFC.

To address this issue, we focus on a delayed match-to-sample task, in which monkeys must decide whether the first juice stimulus is the same as a second stimulus separated by a delay period. We develop a model of the gustatory system that consists of network models of the GC and OFC. Each of the GC and OFC models has a two-dimensional array of neurons, which encode information about three kinds of foods: orange, guava, and tomato. These network models are based on the Izhikevich neuron model [3] and biophysical synapses mediated by neurotransmitters such as AMPA, NMDA, and GABA. Each neural unit consists of a main neuron and an inhibitory interneuron, mutually connected with AMPA and GABA synapses. Main neurons are reciprocally connected with AMPA and NMDA synapses. The NMDA-synaptic connections between these networks are formed by Hebbian learning in a task-relevant way. The gustatory information of the three foods is represented by dynamical attractors in the GC and OFC networks. Simulating our model for match/nonmatch trials, we explored the neural mechanism by which the working memory of gustatory information is generated in the OFC. We show that this working memory is shaped by recurrent activation mediated by short-term synapses of OFC neurons. In addition, we examined how the working memory formed in the OFC is used for match/nonmatch decision-making by adding a decision layer to the model.
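
The Izhikevich neuron [3] at the core of these networks is compact enough to reproduce here. This is the standard published model, not the full GC-OFC network; the parameters below give the regular-spiking regime, and the constant input current I is an illustrative stand-in for synaptic drive.

```python
def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, t_end=200.0):
    """Izhikevich simple model [3]:
        v' = 0.04 v^2 + 5 v + 140 - u + I
        u' = a (b v - u)
    with the reset v -> c, u -> u + d when v reaches 30 mV.
    Returns the spike times (ms) for a constant input current I."""
    v, u = c, b * c
    spikes = []
    for k in range(int(t_end / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:            # spike cutoff and reset
            spikes.append(k * dt)
            v, u = c, u + d
    return spikes
```

With these parameters a sufficiently strong input (e.g. I = 10) produces tonic spiking, while a weak input (e.g. I = 2) leaves the cell at rest; the variable u provides the spike-frequency adaptation that shapes interspike intervals.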

References

1. Shimemura T, Fujita K, Kashimori Y. A neural mechanism of taste perception modulated by odor information. Chemical Senses. 2016; 41(7): 579-589.

2. Lara AH, Kennerley SW, Wallis JD. Encoding of gustatory working memory by orbitofrontal neurons. Journal of Neuroscience. 2009; 29(3): 765-774.

3. Izhikevich EM. Simple model of spiking neurons. IEEE Transactions on Neural Networks. 2003; 14(6): 1569-1572.

P96 Bayesian network change point detection for dynamic functional connectivity

Lingbin Bian, Tiangang Cui, Adeel Razi, Jonathan Keith

Monash University, Melbourne, Australia

Correspondence: Lingbin Bian (lingbin.bian1@monash.edu)

BMC Neuroscience 2020, 21(Suppl 1):P96

We present a novel Bayesian method for identifying changes in dynamic network structure in working-memory task fMRI data via model fitness assessment. Specifically, we detect change-point(s) in dynamic community structure using an overlapping sliding window applied to the multivariate time series. We use the weighted stochastic block model to quantify the likelihood of a network configuration, and develop a novel scoring criterion, which we call posterior predictive discrepancy, that evaluates the goodness of fit between model and observations within the sliding window. The parameters of this model include a latent label vector assigning network nodes to interacting communities and block parameters determining the weighted connectivity within and between communities. GLM analyses were conducted at both the subject and group levels, and the contrasts between the 2-back, 0-back, and baseline conditions were used to localise the regions of interest in the task fMRI data.

The working-memory task fMRI data from the Human Connectome Project (HCP) were pre-processed and GLM analyses were applied. With the extracted time series of regions of interest, we propose to use the Gaussian latent block model [1], also known as the weighted stochastic block model (WSBM), to quantify the likelihood of a network, and Gibbs sampling to sample from the posterior distribution derived from this model. The Gibbs sampling approach we adopt is based on the work of [1,2] for finite mixture models. The proposed model fitness procedure draws parameters from the posterior distribution and uses them to generate a replicated adjacency matrix; it then calculates a disagreement matrix to quantify the difference between the replicated and realised adjacency matrices. To evaluate model fitness, we define a parameter-dependent statistic called the posterior predictive discrepancy (PPD) by averaging the disagreement matrix. We then compute the cumulative discrepancy energy (CDE) from the PPD by applying a second sliding window for smoothing, and use the CDE as a scoring criterion for change point detection. The CDE increases when change points are contained within the window, and can thus be used to assess whether a statistically significant change point exists within a period of time.
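
The replicated-adjacency/disagreement idea can be sketched in a few lines. This is a simplified caricature: known block means replace parameters drawn from a fitted posterior, and a fixed threshold defines the entry-wise disagreement, neither of which is claimed to match the study's exact definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_adjacency(labels, means, sigma=0.3):
    """Draw a weighted adjacency matrix from a Gaussian (weighted
    stochastic) block model: weight of edge (i, j) ~ N(means[zi, zj], sigma)."""
    mu = means[np.ix_(labels, labels)]
    A = rng.normal(mu, sigma)
    A = (A + A.T) / 2
    np.fill_diagonal(A, 0.0)
    return A

def ppd(realised, labels, means, n_rep=50, tol=0.5):
    """Toy posterior predictive discrepancy: average, over replicated
    matrices drawn from the model, of the mean disagreement (absolute
    weight difference exceeding tol) with the realised matrix."""
    scores = []
    for _ in range(n_rep):
        rep = sample_adjacency(labels, means)
        disagreement = np.abs(rep - realised) > tol
        scores.append(disagreement.mean())
    return float(np.mean(scores))

# Two communities: strong within-community, weak between-community weights
labels = np.array([0] * 10 + [1] * 10)
means = np.array([[1.0, 0.1], [0.1, 1.0]])
realised = sample_adjacency(labels, means)

good_fit = ppd(realised, labels, means)           # model matches the data
shuffled = rng.permutation(labels)
bad_fit = ppd(realised, shuffled, means)          # wrong community labels
```

A model with the correct community structure yields a markedly lower discrepancy than one with shuffled labels, which is what lets the smoothed CDE flag windows containing a change point.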

We first applied the algorithm to synthetic data simulated from a multivariate Gaussian distribution for validation. We visualise the Gibbs iterations of the sampled latent labels and the histograms of the block parameters, which characterise the connectivity within and between communities. We then demonstrated the performance of change point detection with different window sizes. In the analyses of the real working-memory task fMRI data, fixed-effects analyses were conducted at the subject level to estimate the average effect size across runs within subjects. At the group level, mixed-effects analyses were conducted, where the subject effect size is considered random. In this work, we mainly focus on the memory-load contrasts (2-back vs 0-back, 2-back vs baseline, and 0-back vs baseline).

References

1. Wyse J, Friel N. Block clustering with collapsed latent block models. Statistics and Computing. 2012; 22: 415-428.

2. Nobile A, Fearnside AT. Bayesian finite mixtures with an unknown number of components: the allocation sampler. Statistics and Computing. 2007; 17: 147-162.

P97 Multiscale simulations of ischemia and spreading depolarization with NEURON

Adam Newton1, Michael Hines2, William W Lytton3, Robert McDougal4, Craig Kelley3

1Yale University, Yale School of Public Health, Connecticut, United States of America; 2Yale University, School of Medicine, Connecticut, United States of America; 3SUNY Downstate Medical Center, Department of Physiology and Pharmacology, New York, United States of America; 4Yale University, Department of Neuroscience, Connecticut, United States of America

Correspondence: Adam Newton (adam.newton@yale.edu)

BMC Neuroscience 2020, 21(Suppl 1):P97

Recent improvements and performance enhancements in the NEURON (neuron.yale.edu) reaction-diffusion module (rxd) allow us to model multiple relevant concentrations in the intracellular and extracellular space. The extracellular space is a coarse-grained macroscopic model based on a volume averaging approach, allowing the user to specify both the free volume fraction (the proportion of space in which species are able to diffuse) and the tortuosity (the average multiplicative increase in path length due to obstacles). These tissue characteristics can be spatially dependent to account for regional or pathological differences.
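
The effect of the two tissue parameters can be illustrated with a toy 1-D version of the volume-averaged scheme (this is not NEURON's rxd implementation, and all numbers are placeholders): tortuosity lambda reduces the effective diffusion coefficient to D/lambda^2, and the free volume fraction alpha scales how strongly a source changes the averaged concentration.

```python
def diffuse(D=1.0, tortuosity=1.6, alpha=0.2, n=101, dx=1.0,
            dt=0.2, steps=500, source=1.0):
    """Explicit 1-D volume-averaged extracellular diffusion with a
    constant point source at the centre. Obstacles enter only through
    D* = D / tortuosity**2 and the 1/alpha scaling of the source."""
    Dstar = D / tortuosity ** 2
    c = [0.0] * n
    for _ in range(steps):
        c[n // 2] += dt * source / alpha     # source release into free volume
        new = c[:]
        for i in range(1, n - 1):
            new[i] = c[i] + dt * Dstar * (c[i - 1] - 2 * c[i] + c[i + 1]) / dx ** 2
        c = new
    return c

free = diffuse(tortuosity=1.0)   # unobstructed medium: broad spread
tissue = diffuse()               # tortuous tissue: higher, narrower peak
```

With the same source, the tortuous medium accumulates a higher concentration near the source and spreads less far, which is the behavior that matters for wave propagation in the tissue-scale model below.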

Using a multiscale modeling approach, we have developed a pair of models for spreading depolarization at spatial scales from microns to millimeters and time scales from milliseconds to minutes. The cellular/subcellular-scale model adapted existing mechanisms for a morphologically detailed CA1 pyramidal neuron together with a simple astrocyte model. This model included reaction-diffusion of K+, Na+, Cl- and glutamate, with detailed cytosolic and endoplasmic reticulum Ca2+ regulation. Homeostatic mechanisms were added to the model, including Na+/K+-ATPase pumps, Ca2+ pumps, SERCA, NKCC1, KCC2, and glutamate transporters. We used BluePyOpt to perform a parameter search, constrained by the requirement of realistic electrophysiological responses while maintaining ionic homeostasis. This detailed model was used to explore the hypothesis that individual dendrites have distinct vulnerability to damage because their surface-area-to-volume ratios lead to different intracellular Ca2+ levels.

At the tissue scale, we adapted simpler point-neuron models and densely packed them in a coarse-grained macroscopic 3D volume. This model includes a simple description of oxygen dynamics and dynamic changes in volume fraction, allowing us to model the effect of changes in tissue diffusion characteristics on wave propagation during spreading depolarization.

Acknowledgments: Research supported by NIH grant R01MH086638.

P98 Bayesian mechanics in the brain under the free-energy principle

Chang S Kim

Chonnam National University, Department of Physics, Gwangju, South Korea

Correspondence: Chang S Kim (cskim@jnu.ac.kr)

BMC Neuroscience 2020, 21(Suppl 1):P98

In the field of neuroscience, the free-energy principle (FEP) stipulates that all viable organisms cognize and behave using probabilistic models embodied in their brain in a manner that ensures their adaptive fitness in the environment [1].

Here, we report on our recent theoretical study that supports the FEP as a physically plausible theory grounded in the principle of least action [2]. We recapitulate the FEP carefully [3] and argue that some technical facets of its conventional formalism require careful reformulation [4]. Accordingly, we articulate the FEP as stating that living organisms minimize sensory uncertainty, the average surprisal over a temporal horizon, and we reformulate the recognition dynamics by which the brain actively infers the external causes of its sensory inputs. We cast the Bayesian inversion problem in the organism's brain as finding the optimal neural trajectories that minimize the time integral of the informational free energy (IFE), which is an upper bound on the long-term average surprisal. Specifically, we abstain from i) the non-Newtonian extension of continuous states, which yields the generalized motion by recursively taking higher-order derivatives of the sensory observation and state equations, and ii) the heuristic gradient-descent minimization of the IFE in a moving frame of reference in a generalized-state space; instead, we view the nonequilibrium dynamics of brain states as drift-diffusion flows that locally conserve the probability density. The advantage of our formulation is that only bare variables (positions) and their first-order derivatives (velocities) enter the Bayesian neural computation, thereby dismissing the need for extra-physical assumptions.

Bare variables are an organism’s representations of the causal environment, and their conjugate momenta resemble the precision-weighted prediction errors in a predictive coding language [5].
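
In the language of the least-action formulation above, writing q for the bare perceptual states, the scheme can be summarized generically as follows (a schematic sketch only; the specific informational Lagrangian is developed in [4]):

```latex
S[q] \;=\; \int_{t_0}^{t_1} F\bigl(q(t),\dot q(t)\bigr)\,dt,
\qquad
\delta S = 0 \;\Longrightarrow\;
\frac{d}{dt}\,\frac{\partial F}{\partial \dot q} \;=\; \frac{\partial F}{\partial q},
\qquad
p \;\equiv\; \frac{\partial F}{\partial \dot q},
```

where F is the informational free energy playing the role of a Lagrangian, the Euler-Lagrange equations furnish the recognition dynamics, and the conjugate momenta p are the quantities that resemble precision-weighted prediction errors.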

Furthermore, we consider the sensory-data-generating dynamics to be nonstationary on an equal footing with intra- and inter-hierarchical-level dynamics in a neuronally based biophysical model.

Consequently, our theory delivers a natural account of the descending predictions and ascending prediction errors in the brain’s hierarchical message-passing structure (Fig. 1).

Fig. 1

Schematic of the neural circuitry [4], where each cortical level is specified by the perceptual states (S(i),V(i)) and their conjugate momenta (Ps(i),Pv(i)). The prediction error Ps(0) of incoming sensory data at the lowest level induces an inhibitory change in the perceptual momenta (Ps(1),Pv(1)). Subsequently, the prediction error propagates up the hierarchy

The ensuing neural circuitry may be related to the alpha-beta and gamma rhythms that characterize the feedback and feed-forward influences, respectively, in the primate visual cortex [6].

References

1. Friston K. The free-energy principle: a unified brain theory? Nature Reviews Neuroscience. 2010; 11(2): 127-38.

2. Landau LD, Lifshitz EM. Mechanics: Volume 1 (Course of Theoretical Physics). 3rd edition. Amsterdam: Elsevier Ltd.; 1976.

3. Buckley CL, Kim CS, McGregor S, Seth AK. The free energy principle for action and perception: a mathematical review. Journal of Mathematical Psychology. 2017; 81: 55-79.

4. Kim CS. Recognition dynamics in the brain under the free energy principle. Neural Computation. 2018; 30(10): 2616-59.

5. Rao RP, Ballard DH. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience. 1999; 2(1): 79-87.

6. Michalareas G, et al. Alpha-beta and gamma rhythms subserve feedback and feedforward influences among human visual cortical areas. Neuron. 2016; 89(2): 384-97.

P99 Integrated model of reservoir computing and autoencoder for explainable artificial intelligence

Hoon-Hee Kim

Korea Advanced Institute of Science and Technology, Daejeon, South Korea

Correspondence: Hoon-Hee Kim (hkma19gm@kaist.ac.kr)

BMC Neuroscience 2020, 21(Suppl 1):P99

Due to the development of machine learning methods such as deep neural networks, Artificial Intelligence (AI) is now used in many areas. Modern AI technology accurately solves problems such as classification, regression, and prediction, but it lacks the ability to explain its decision process in terms humans can understand; such systems are called black-box AI. Black-box AI, whose decision process humans cannot inspect, is difficult to use in high-risk areas such as important social and legal decisions, medical diagnosis, and financial prediction [1]. Although there are highly explainable machine learning methods such as decision trees, these methods tend to have low performance and are not suitable for solving complex problems [2]. In this study, I suggest a novel explainable AI method with high performance, based on an integrated model of reservoir computing and an autoencoder. Reservoir computing, a recurrent neural network consisting of three layers (inputs, reservoir, and readouts), can learn nonlinear dynamics using linear learning methods [3]. Recently, a study was published in which neural networks recovered actual physical laws using a variational autoencoder, which can extract interpretable features of the training data [4]. In the integrated model, the features of the training data were learned by the autoencoder structure and the linear learning rule of reservoir computing. These features can therefore be represented as a linear formula that a human can readily understand. To validate the integrated model, I tested it on predicting trends of the S&P 500 index. The model showed more than 80% accuracy and reported which features were most important to the prediction in terms of a weighted linear formula.
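
The reservoir-computing half of the model, with its linear, inspectable readout, can be sketched as a minimal echo state network [3]. This is not the integrated reservoir-autoencoder model of the abstract, and the one-step prediction of a noisy sine below is a toy stand-in for the index-trend task:

```python
import numpy as np

rng = np.random.default_rng(42)

class ESN:
    """Minimal echo state network: fixed random reservoir, readout
    trained by linear least squares. Because the readout is linear,
    W_out is directly inspectable as a weighted formula over features."""

    def __init__(self, n_in, n_res=100, rho=0.9):
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Rescale to the desired spectral radius for the echo-state property
        self.W = W * (rho / np.max(np.abs(np.linalg.eigvals(W))))
        self.n_res = n_res

    def states(self, U):
        x, X = np.zeros(self.n_res), []
        for u in U:
            x = np.tanh(self.W_in @ u + self.W @ x)
            X.append(x)
        return np.array(X)

    def fit(self, U, Y, washout=50):
        X = self.states(U)[washout:]          # drop initial transient
        self.W_out, *_ = np.linalg.lstsq(X, Y[washout:], rcond=None)
        return self

    def predict(self, U):
        return self.states(U) @ self.W_out

# One-step-ahead prediction of a noisy sine as a toy 'trend' task
t = np.arange(0, 100, 0.1)
s = np.sin(t) + 0.05 * rng.standard_normal(t.size)
U, Y = s[:-1, None], s[1:, None]
esn = ESN(n_in=1).fit(U[:600], Y[:600])
pred = esn.predict(U[600:])
err = np.mean((pred[50:] - Y[600:][50:]) ** 2)
```

Inspecting `esn.W_out` after training shows each output as a weighted sum of reservoir features, which is the sense in which the readout stays human-readable.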

Acknowledgments: This study was supported by the National Research Foundation of Korea [NRF-2019R1A6A3A01096892].

References

1. Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence. 2019; 1(5): 206-215.

2. Defense Advanced Research Projects Agency. Broad Agency Announcement, Explainable Artificial Intelligence. 2016. At: https://www.darpa.mil/attachments/DARPA-BAA-16-53.pdf

3. Jaeger H, Haas H. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science. 2004; 304(5667): 78-80.

4. Iten R, Metger T, Wilming H, Del Rio L, Renner R. Discovering physical concepts with neural networks. Physical Review Letters. 2020; 124(1): 010508.

P100 Role of breathing signals and intra-vIRt connectivity in the generation of vibrissa movements

David Golomb1, Jeffrey Moore2, Arash Fassihi3, Fan Wang4, David Kleinfeld3

1Ben Gurion University, Physiology, Be’er-Sheva, Israel; 2Harvard University, Molecular and Cellular Biology, Cambridge, Massachusetts, United States of America; 3University of California San Diego, Physics, San Diego, California, United States of America; 4Duke University, Neurobiology, Durham, North Carolina, United States of America

Correspondence: David Golomb (golomb@bgu.ac.il)

BMC Neuroscience 2020, 21(Suppl 1):P100

The vIRt nucleus in the medulla, apparently composed mainly of inhibitory neurons, is necessary for whisking rhythm generation. It innervates motoneurons in the facial nucleus (FN) that project to intrinsic vibrissa muscles. The nearby pre-Bötzinger complex (pBötC), which generates inhalation, sends inhibitory inputs to the vIRt nucleus. We explore potential mechanisms of the vIRt synchronization needed for whisking, using analysis of experimental data and computational modeling. Using time courses of breathing and whisking in rats, we compute the relative amplitude An of the n-th whisking cycle within a breathing cycle. For head-restrained rats, the average values of An/A1 are 0.56, 0.46 and 0.43 for n = 2, 3, 4. For freely behaving rats, the value of A2/A1 is 0.79. The observation that A1 is larger than subsequent amplitudes suggests that the recovery of vIRt neurons from inhibition by the pBötC contributes to the synchronization of vIRt neurons. Lower-amplitude periodic whisking, however, can occur after decay of the pBötC signal. To explain how the vIRt network generates these “intervening” whisks, and why the amplitude A1 is larger than the following amplitudes, we construct and analyze a conductance-based model of the vIRt circuit composed of two hypothetical groups, vIRtr and vIRtp, of bursting inhibitory neurons with spike-frequency adaptation currents and constant external excitation. Only neurons in vIRtr are inhibited by the pBötC and inhibit FN motoneurons. We denote the strengths of the inhibitory conductances between neurons across the two groups and within each group by gas and gs, respectively. If gas is larger than gs, the two groups burst alternately, as observed experimentally. If gas is too large, however, one group is active and the second is silent. The oscillation amplitude depends linearly on the constant external excitation, and the period increases as a function of gas-gs.
Thus, the external input to the circuit and the level of inhibition within the circuit control the amplitude and frequency of the intervening whisking, respectively. Our model thus provides a means to control the wide range of whisking frequencies observed in experiments.
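
The alternating-burst regime (cross-inhibition gas stronger than self-inhibition gs, with adaptation releasing the suppressed group) can be caricatured with a rate model rather than the full conductance-based model of the abstract. Everything below — the threshold-linear activation, the time constants, and all parameter values — is an illustrative assumption, chosen only so that the two units alternate.

```python
import numpy as np

# Two mutually inhibiting, adapting rate units: stand-ins for vIRtr/vIRtp.
# Alternation requires cross-inhibition g_as > self-inhibition g_s and
# adaptation strong enough to release the suppressed unit (values assumed).
tau, tau_a = 1.0, 20.0          # fast rate vs slow adaptation time constants
g_as, g_s, b, I = 3.0, 0.0, 3.0, 2.0
dt, n_steps = 0.01, 40000

r = np.array([0.6, 0.0])        # asymmetric start breaks the symmetry
a = np.zeros(2)
trace = np.zeros((n_steps, 2))
for k in range(n_steps):
    inh = g_as * r[::-1] + g_s * r       # cross- and self-inhibition
    drive = np.maximum(I - inh - a, 0)   # threshold-linear activation
    r = r + dt * (-r + drive) / tau
    a = a + dt * (-a + b * r) / tau_a
    trace[k] = r

# Count alternations after a transient: sign changes of r1 - r2
d = np.sign(trace[5000:, 0] - trace[5000:, 1])
switches = np.count_nonzero(np.diff(d[d != 0]))
```

In this caricature the alternation period grows with the adaptation timescale, while the burst amplitude tracks the constant drive I, qualitatively matching the roles the abstract assigns to inhibition strength and external excitation.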

Acknowledgements: supported by NIH grant 5U19NS107466-02.

P101 Representing predictability of sequence patterns in a random network with short-term plasticity

Vincent S C Chien, Richard Gast, Burkhard Maess, Thomas Knösche

Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

Correspondence: Thomas Knösche (knoesche@cbs.mpg.de)

BMC Neuroscience 2020, 21(Suppl 1):P101

The brain is capable of recognizing repetitive acoustic patterns within a few repetitions, which is essential for the timely identification of sound objects and the prediction of upcoming sounds. Several studies found neural correlates of the predictability of sequence patterns, but the underlying neural mechanism is not yet clear. To investigate the mechanism supporting the fast emergence of this predictive state, we use neural mass modeling to replicate the experimental observations during sequence repetition [1]. First, we investigated the effect of short-term plasticity (STP) on the response of a Wilson-Cowan node to a prolonged stimulus, where the node consists of an excitatory (E) and an inhibitory (I) population. In total, 27 combinations of plasticity settings were examined, where the plasticity types include short-term depression (STD), short-term facilitation (STF), and no STP, and the connection types include E-to-E, E-to-I, I-to-E, and I-to-I connections. The simulated signals that best explain the observed MEG temporal profiles (i.e., an onset peak followed by a rising curve) rely on the setting where STD is applied to the E-to-E connection and STF to the E-to-I connection. Second, with these preferred plasticity settings (i.e., STD on E-to-E and STF on E-to-I), we simulated the dynamics of a random network in response to regular (REG) and random (RAND) sequences in PyRates [2]. The simulated signals reproduce several experimental observations, including the above-mentioned MEG temporal profiles, the predictability-dependent MEG amplitude (i.e., dependency on the regularity and alphabet size of the input sequence), as well as the MEG responses in the switch conditions (i.e., from REG to RAND, and from RAND to REG). Third, we used a simplified two-level network to illustrate the main mechanisms supporting such a representation of predictability during sequence repetition. The simplified network consists of nodes that are selective to sound tone (level 1) and nodes that are selective to tone direction (level 2). The simulation reveals higher firing rates of the I populations of level-2 nodes during the REG than the RAND condition, which contributes to a stronger simulated MEG amplitude via I-to-E connections (Fig. 1). In conclusion, we provide a possible mechanism to account for the experimental observations. First, the increased MEG amplitude is mainly due to increased inhibitory activity. Second, the effect of alphabet size is due to the two forms of STP (i.e., STD on E-to-E and STF on E-to-I). Third, the effect of regularity relies on the inclusion of the 2nd-level nodes that sparsely encode the repetitive patterns. In short, more predictable sequence patterns cause a stronger accumulation of inhibitory activity in direction-selective areas via STP, which in turn leads to a higher MEG amplitude. This mechanism emphasizes the need for STP at each stage of the bottom-up process, whereas the involvement of top-down processes is not necessary.
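
The preferred plasticity setting — STD on the E-to-E connection and STF on the E-to-I connection — can be sketched as a single Wilson-Cowan node with Tsodyks-Markram-style efficacy variables. This is a minimal illustration, not the authors' PyRates model; the sigmoid, the rate scaling, and all parameter values are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-4.0 * (z - 1.0)))   # assumed gain/threshold

# Wilson-Cowan E/I node; x depresses E->E, u facilitates E->I
tau_e, tau_i = 0.01, 0.01           # 10 ms population time constants
w_ee, w_ei, w_ie, w_ii = 2.0, 2.0, 1.5, 0.5
U, U0 = 0.3, 0.2                    # depression / facilitation increments
tau_d, tau_f = 0.5, 1.0             # recovery time constants (s)
dt, T = 1e-4, 2.0

E = I = 0.0
x, u = 1.0, U0                      # STD resource (E->E), STF variable (E->I)
rec = []
for k in range(int(T / dt)):
    stim = 1.0 if k * dt > 0.2 else 0.0       # prolonged stimulus from 200 ms
    r_e = 50.0 * E                             # presynaptic rate driving STP
    x += dt * ((1 - x) / tau_d - U * x * r_e)          # depression
    u += dt * ((U0 - u) / tau_f + U0 * (1 - u) * r_e)  # facilitation
    E += dt * (-E + sigmoid(w_ee * x * E - w_ie * I + stim)) / tau_e
    I += dt * (-I + sigmoid(w_ei * u * E - w_ii * I)) / tau_i
    rec.append((E, I, x, u))
rec = np.array(rec)
```

During sustained activity the E-to-E efficacy x runs down while the E-to-I efficacy u builds up, which is the combination the abstract identifies as producing an onset peak followed by a slowly rising inhibition-dominated response.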

References

1. Barascud N, Pearce MT, Griffiths TD, Friston KJ, Chait M. Brain responses in humans reveal ideal observer-like sensitivity to complex acoustic patterns. Proceedings of the National Academy of Sciences. 2016; 113(5): E616-25.

2. Gast R, et al. PyRates—A Python framework for rate-based neural simulations. PLoS ONE. 2019; 14(12): e0225900.

P102 Ephaptic coupling in white matter fibre bundles modulates axonal transmission delays

Helmut Schmidt1, Gerald Hahn2, Gustavo Deco3, Thomas Knösche1

1Max Planck Institute for Human Cognitive and Brain Sciences, Brain Networks, Leipzig, Germany; 2Universitat Pompeu Fabra, Computational Neuroscience Group, Barcelona, Spain; 3Universitat Pompeu Fabra, Barcelona, Spain

Correspondence: Thomas Knösche (knoesche@cbs.mpg.de)

BMC Neuroscience 2020, 21(Suppl 1):P102

Axonal connections are widely regarded as faithful transmitters of neuronal signals with fixed delays. The reasoning behind this is that local field potentials (LFPs) caused by spikes travelling along axons are too small to have an effect on other axons. We demonstrate that, although the local field potentials generated by single spikes are of the order of microvolts, the collective local field potential generated by spike volleys can reach several millivolts. As a consequence, the resulting depolarisation of the axonal membranes (i.e. ephaptic coupling) increases the velocity of spikes, and therefore reduces axonal transmission delays between brain areas.

We first compute the local field potential using the line approximation [1,2] for a spike in a single axon. We find that it generates an LFP with an amplitude of about 20 microvolts, which is too weak to have a significant effect on neighbouring axons (Fig. 1A). Next, we extend this formalism to fibre bundles to compute the LFP generated by spike volleys with different levels of synchrony. Such spike volleys can generate LFPs with amplitudes of several millivolts (Fig. 1B), and the amplitude of the LFP depends strongly on the level of synchrony of the spike volley. Finally, we devise a spike propagation model in which the LFPs generated by spikes modulate their propagation velocity. This model reveals that with an increasing number of spikes in a volley, the axonal transmission delays decrease (Fig. 1C). To the best of our knowledge, this study is the first to investigate the effect of LFPs on axonal signal transmission in macroscopic fibre bundles. The main result is that axonal transmission delays decrease if spike volleys are sufficiently large and synchronous. This is in contrast to studies investigating ephaptic coupling between spikes at the microscopic level (e.g. [3]), which have used a different model setup that resulted in increasing axonal transmission delays. Our results are a possible explanation for the decreasing stimulus latency with increasing stimulus intensity observed in many psychological experiments (e.g. [4]). We speculate that the modulation of axonal transmission delays contributes to the flexible synchronisation of high frequency oscillations (e.g. gamma oscillations).
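
The synchrony dependence of the volley LFP can be illustrated with a toy superposition model. For brevity each axon is collapsed to an equivalent point source at a fixed distance, unlike the full line approximation [1,2] used in the abstract; the biphasic current waveform, conductivity, and all magnitudes are illustrative assumptions, chosen only to reproduce the microvolt-versus-near-millivolt contrast qualitatively.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.3             # extracellular conductivity, S/m (typical value)
dt, T = 1e-4, 0.05      # 0.1 ms steps, 50 ms window
t = np.arange(0, T, dt)

def spike_current(t, t0, tau=0.5e-3, q=1e-9):
    """Biphasic transmembrane current (2nd derivative of a Gaussian):
    zero net charge, peak scale q amperes (assumed waveform)."""
    s = (t - t0) / tau
    return q * (1 - s ** 2) * np.exp(-s ** 2 / 2)

def lfp(spike_times, d=1e-4):
    """Point-source superposition at distance d metres from each axon."""
    v = np.zeros_like(t)
    for t0 in spike_times:
        v += spike_current(t, t0) / (4 * np.pi * sigma * d)
    return v

n = 200
single = lfp([0.02])                              # one spike: a few microvolts
sync = lfp(np.full(n, 0.02))                      # fully synchronous volley
jit = lfp(0.02 + 5e-3 * rng.standard_normal(n))   # 5 ms jitter: partial cancellation
```

A fully synchronous volley sums coherently (amplitude grows linearly with the number of spikes), whereas jittered, zero-net-charge waveforms partially cancel — the same qualitative dependence on synchrony reported in Fig. 1B.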

Fig. 1

A Spike profile (top) and resulting LFP in a single axon (bottom). B Spike profile (top) and LFP generated by fully synchronised spike volley (bottom). C Axonal delays as function of number of spikes and bundle radius in the presence (solid) and absence (dashed) of ephaptic coupling

Acknowledgements: This work has been supported by the German Research Foundation (DFG), SPP2041.

References

1. Holt GR, Koch C. Electrical interactions via the extracellular potential near cell bodies. Journal of Computational Neuroscience. 1999; 6: 169-184.

2. McColgan T, et al. Dipolar extracellular potentials generated by axonal projections. eLife. 2017; 6: e25106.

3. Binczak S, Eilbeck JC, Scott AC. Ephaptic coupling of myelinated nerve fibres. Physica D. 2001; 148: 159-174.

4. Ulrich R, Rinkenauer G, Miller J. Effects of stimulus duration and intensity on simple reaction time and response force. Journal of Experimental Psychology. 1998; 24: 915-928.

P103 Constructing model surrogates of populations of dopamine neuron and medium spiny neuron models for understanding phenotypic differences

Tim Rumbell1, Sushmita Allam2, Tuan Hoang-Trong2, Jaimit Parikh2, Viatcheslav Gurev2, James Kozloski2

1IBM Research, Healthcare and Life Sciences, Raleigh, North Carolina, United States of America; 2IBM Research, Yorktown Heights, New York, United States of America

Correspondence: Tim Rumbell (timrumbell@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P103

Neurons of a specific type have intrinsic variety in their electrophysiological properties. Intracellular parameters, such as ion channel conductances and kinetics, also have high variability within a neuron type, yet reliable functions emerge from a wide variety of parameter combinations. Recordings of electrophysiological properties from populations of neurons under different experimental conditions or perturbations produce sub-groups that form “electrophysiological phenotypes”. For example, different properties may derive from wild-type vs. disease model animals or may change across multiple age groups [1].

Populations of neuron models can represent a neuron type by varying parameter sets, each able to produce the outputs of a recording, and all spanning the ranges of recorded features. We previously generated model populations using evolutionary search with an error function that combines soft-thresholding with a crowdedness penalty in feature space, allowing coverage of the empirical range of features with models. The technique was used to generate a population of dopamine neuron (DA) models, which captured the majority of empirical features, generalized to perturbations, and revealed sets of coefficients predicted to reliably modulate activity [2]. We also used this technique to construct striatal medium spiny neuron (MSN) model populations, which recapitulated the effects of extracellular potassium changes [3] and captured differences in electrophysiological phenotype between MSNs from wild-type mice and from the Q175 model of Huntington’s disease. Our approach becomes prohibitively computationally expensive, however, when we seek to produce multiple populations that represent many phenotypes from across a spectrum. For example, to recreate the non-linear developmental trajectory observed across postnatal development of DAs [1] we would need to perform multiple optimizations.

Here we demonstrate the construction of model surrogates that map model parameters to features spanning the range of multiple electrophysiological phenotypes. We sampled from parameter space and simulated models to create a surrogate training set. Using our evolutionary search as prior knowledge of our parameter space enabled a dense sampling in regions of the high-dimensional model parameter space that were likely to produce valid features. We trained a deep neural network with our datasets, producing a surrogate for our model that maps parameter set distributions to output feature distributions. This can be used in place of the neuron model for model sampling, allowing rapid construction of populations of models that match different distributions of features from across multiple phenotypes. We demonstrate this approach using DA developmental age groups and MSN disease progression states as targets, facilitating a mechanistic understanding of parameter modulations that generate differences in phenotypes.
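
The surrogate idea — sample parameters, simulate features, then train a network that maps parameter distributions to feature distributions — can be sketched with a stand-in "simulator" and a small numpy regressor. This is only a schematic of the approach: the authors trained a deep network on real dopamine-neuron and MSN model outputs, whereas the simulator, network size, and training details below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(p):
    """Hypothetical stand-in for an expensive neuron model: maps three
    'conductances' to two electrophysiological 'features'."""
    g_na, g_k, g_leak = p[..., 0], p[..., 1], p[..., 2]
    rate = 10 * g_na / (1 + g_k) + g_leak
    ahp = g_k / (0.5 + g_na)
    return np.stack([rate, ahp], axis=-1)

# Dense sampling of parameter space (in practice, focused on regions the
# evolutionary search found to yield valid features)
P = rng.uniform(0.1, 2.0, size=(2000, 3))
F = simulate(P)
F = (F - F.mean(0)) / F.std(0)          # normalise features for training

# One-hidden-layer surrogate trained by full-batch gradient descent
H, lr = 32, 0.05
W1 = 0.3 * rng.standard_normal((3, H)); b1 = np.zeros(H)
W2 = 0.3 * rng.standard_normal((H, 2)); b2 = np.zeros(2)

losses = []
for step in range(3000):
    h = np.tanh(P @ W1 + b1)
    out = h @ W2 + b2
    err = out - F
    losses.append(float(np.mean(err ** 2)))
    # backpropagation through the two layers
    gW2 = h.T @ err / len(P); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = P.T @ dh / len(P); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

Once trained, evaluating the surrogate is essentially free, so candidate parameter sets can be screened against target feature distributions without re-running the simulator — the property that makes population construction across many phenotypes tractable.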

References

1. Dufour MA, Woodhouse A, Amendola J, Goaillard JM. Non-linear developmental trajectory of electrical phenotype in rat substantia nigra pars compacta dopaminergic neurons. eLife. 2014; 3: e04059.

2. Rumbell T, Kozloski J. Dimensions of control for subthreshold oscillations and spontaneous firing in dopamine neurons. PLoS Computational Biology. 2019; 15(9): e1007375.

3. Octeau JC, et al. Transient, consequential increases in extracellular potassium ions accompany channelrhodopsin2 excitation. Cell Reports. 2019; 27(8): 2249-61.

P104 Integrated cortical, thalamic and basal ganglia model of brain function: validation against functional requirements

Sébastien Naze1,2, James Kozloski2

1IBM Research, Melbourne, Australia; 2IBM Research, Yorktown Heights, New York, United States of America

Correspondence: Sébastien Naze (sebastien.naze@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P104

Large-scale brain models encompassing cortico-cortical, thalamo-cortical and basal ganglia processing are fundamental to understanding the brain as an integrated system in healthy and disease conditions, but they are complex to analyze and interpret. Neuronal processes are typically segmented by region and modality in order to explain an experimental observation at a given scale, and then integrated into a global framework [1]. Here, we present a set of functional requirements applied to validate the recently developed IBEx model [2] against a learning task involving coordinated activity across cortical and sub-cortical regions in a brain-computer interface (BCI) context involving volitional control of a sensory stimulus [3]. The original IBEx model comprises interacting modules for supra-granular and infra-granular cortical layers, thalamic integration, basal ganglia parallel processing, and dopamine-mediated reinforcement learning. We decompose and analyze each subsystem in the context of the BCI learning task, whereby parameters are tuned to comply with its functional requirements. Intermediate conclusions are presented for each subsystem according to the constraints imposed to satisfy the requirements, before re-incorporating the subsystem into the global framework. Consequences of model modifications and parameter tuning are assessed at the scales of the subsystem and the whole brain system. The relation between infra-granular spiking activity in different cortical regions, thalamo-cortical delta rhythms, and higher-level descriptions of cognitive or motor trajectories (according to the brain region) is displayed. The relation to phenotypes associated with Huntington’s disease is presented, and the framework is discussed in relation to other state-of-the-art integrative efforts to understand complex high-order brain functions [4,5].

References

1. Eliasmith C, Trujillo O. The use and abuse of large-scale brain models. Current Opinion in Neurobiology. 2014; 25: 1-6.

2. Kozloski J. Closed-loop brain model of neocortical information-based exchange. Frontiers in Neuroanatomy. 2016; 10: 3.

3. Koralek AC, Jin X, Long II JD, Costa RM, Carmena JM. Corticostriatal plasticity is necessary for learning intentional neuroprosthetic skills. Nature. 2012; 483(7389): 331-5.

4. Oizumi M, Albantakis L, Tononi G. From the phenomenology to the mechanisms of consciousness: integrated information theory 3.0. PLoS Computational Biology. 2014; 10(5): e1003588.

5. Mashour GA, Roelfsema P, Changeux JP, Dehaene S. Conscious processing and the global neuronal workspace hypothesis. Neuron. 2020; 105(5): 776-98.

P105 Generating executable mushroom body and lateral horn circuits from the hemibrain dataset with FlyBrainLab

Aurel A Lazar, Mehmet K Turkcan, Yiyin Zhou

Columbia University, Electrical Engineering, New York, United States of America

Correspondence: Yiyin Zhou (yiyin@ee.columbia.edu)

BMC Neuroscience 2020, 21(Suppl 1):P105

We introduce executable circuits for two higher-order olfactory processing centers in the brain of Drosophila melanogaster implicated in learning and memory: the lateral horn (LH) and the mushroom body (MB). Despite the large amounts of data available on these two neuropils, there is a dearth of executable neural circuit models governing learning and memory in the MB and the LH. We use the recently released Hemibrain dataset [1] as the biological basis of our implementations, analyzing cell types, numbers and connectivity to realize computational models of both neuropils. Our implementations utilize the FlyBrainLab environment [2], can visualize neural circuits morphologically or as circuit diagrams, run on GPUs, and are designed to facilitate customization of neuron and synapse models at the per-cell and per-cell-type level.

We first study the neuron types and connectivity of the mushroom body, a neuropil that implements associative learning. We model the individual lobes that comprise the mushroom body and each of their so-called compartments, along with the connectivity between the Kenyon cells (KCs), mushroom body output neurons (MBONs) and dopaminergic neurons (DANs), thereby providing an interactive circuit diagram that allows for customizable ablation or activation experiments (Fig. 1). Implementing an interface for the input to the MB from antennal lobe projection neurons (PNs), we utilize the observed PN-to-KC connectivity and provide comparisons against randomly generated instantiations of PN-to-KC connectivity, testing a long-standing hypothesis about the nature of this connectivity.
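
Generating a random instantiation of PN-to-KC connectivity for such a comparison can be sketched as follows. The cell counts and in-degree below are rough literature-scale assumptions, not values taken from the hemibrain analysis; in the actual study the random ensembles would be compared against the observed connectivity matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pn, n_kc, k = 150, 2000, 6     # PN channels, KCs, inputs per KC (assumed)

def random_pn_kc(rng):
    """Null model: each KC samples k distinct PNs uniformly at random."""
    C = np.zeros((n_kc, n_pn), dtype=bool)
    for i in range(n_kc):
        C[i, rng.choice(n_pn, size=k, replace=False)] = True
    return C

C = random_pn_kc(rng)
in_deg = C.sum(1)                # every KC has exactly k inputs
pn_use = C.sum(0)                # how often each PN is sampled

# Pairwise input overlap between KCs; for random wiring this concentrates
# around k*k/n_pn, a statistic that can be tested against the observed data
pairs = rng.integers(0, n_kc, size=(5000, 2))
overlap = (C[pairs[:, 0]] & C[pairs[:, 1]]).sum(1).mean()
```

Summary statistics such as the input-overlap distribution or the per-PN usage spectrum computed on many random draws give the null distribution against which the observed wiring can be scored.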

Fig. 1

Visualization and exploration of the entire MB circuit in the Hemibrain dataset using FlyBrainLab (top). Customizable circuit diagram of the mushroom body (bottom)

Second, we investigate the connectivity within the LH, a neuropil associated with innate memories, and the connectivity between the MB and the LH. We analyze in detail the connectivity between PNs, lateral horn local neurons, and output neurons. We construct an executable circuit of the LH based on our analysis.

The implementations we provide are a step towards building integrated models of sensory systems derived from biological data. Our approach showcases the impact an integrative ecosystem can have on building executable circuit models for understanding the functional logic of neurocomputation in the brain of model organisms.

Acknowledgments: The research reported here was supported by DARPA under contract #HR0011-19-9-0035.

References

1. Xu CS, et al. A connectome of the adult Drosophila central brain. BioRxiv. 2020.

2. Turkcan MK, et al. FlyBrainLab: an interactive computing environment for the fruit fly brain. Society for Neuroscience Meeting. 2019.

P106 Efficient communication in distributed simulations of spiking neuronal networks with gap junctions

Jakob Jordan1, Moritz Helias2, Markus Diesmann2, Susanne Kunkel3

1University of Bern, Department of Physiology, Bern, Switzerland; 2Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6, INM-10) and Institute for Advanced Simulation (IAS-6), Jülich, Germany; 3Norwegian University of Life Sciences, Faculty of Science and Technology, Ås, Norway

Correspondence: Markus Diesmann (diesmann@fz-juelich.de)

BMC Neuroscience 2020, 21(Suppl 1):P106

Investigating the dynamics and function of large-scale spiking neuronal networks with realistic numbers of synapses is made possible today by state-of-the-art simulation code that scales to the largest contemporary supercomputers. These implementations exploit the delayed, point-event-like nature of the spike interaction between neurons. In a network with only chemical synapses, the dynamics of all neurons are decoupled for the duration of the minimal synaptic transmission delay, such that the dynamics of each neuron can be propagated independently for that duration without requiring information from other neurons. Hence, in distributed simulations of such networks, compute nodes need to communicate spike data only after this period [1].

Electrical interactions, also called gap junctions, at first seem to be incompatible with such a communication scheme, as they couple the membrane potentials of pairs of neurons instantaneously. Hahne et al. [2], however, demonstrate that communication of spikes and gap-junction data can be unified using waveform-relaxation methods [3]. Despite these advances, simulations involving gap junctions scale only poorly, due to a communication scheme that collects global data on each compute node. In comparison to chemical synapses, gap junctions are far less abundant. To improve scalability we exploit this sparsity by integrating the existing framework for continuous interactions with a recently proposed directed communication scheme for spikes [4]. Using a reference implementation in the NEST simulator (www.nest-simulator.org, [5]), we demonstrate excellent scalability of the integrated framework, accelerating large-scale simulations with gap junctions by more than an order of magnitude. This allows, for the first time, the efficient exploration of the interactions of chemical and electrical coupling in large-scale neuronal network models with natural synapse density, distributed across thousands of compute nodes.
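
The waveform-relaxation idea [3] behind the unified scheme can be illustrated on two passive "neurons" coupled by a gap junction: each sweep integrates every neuron over the whole communication interval using the other neuron's waveform from the previous sweep, so within-step communication is avoided, and the iteration converges to the fully coupled solution. The passive membrane model and all parameters below are illustrative, not the NEST implementation.

```python
import numpy as np

tau, g, dt, T = 10.0, 0.2, 0.01, 50.0   # membrane tau, gap conductance (assumed)
n = int(T / dt)
E = np.array([1.0, -1.0])               # different resting drives

def step(v, v_other, e):
    """One explicit Euler step of a passive membrane with gap-junction current."""
    return v + dt * (-(v - e) + g * (v_other - v)) / tau

# Reference: fully coupled explicit Euler over the whole interval
V = np.zeros((n + 1, 2))
for k in range(n):
    V[k + 1, 0] = step(V[k, 0], V[k, 1], E[0])
    V[k + 1, 1] = step(V[k, 1], V[k, 0], E[1])

# Waveform relaxation (Jacobi): each neuron integrates the full interval
# using the other's waveform from the previous sweep
W = np.zeros((n + 1, 2))
for sweep in range(12):
    prev = W.copy()
    for k in range(n):
        W[k + 1, 0] = step(W[k, 0], prev[k, 1], E[0])
        W[k + 1, 1] = step(W[k, 1], prev[k, 0], E[1])

err = np.max(np.abs(W - V))
```

For dissipative dynamics like this, the sweep error shrinks superlinearly on a finite interval, which is why a small number of relaxation iterations per communication interval suffices in practice.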

Acknowledgements: Partly supported by Helmholtz young investigator group VH-NG-1028, European Union’s Horizon 2020 funding framework under grant agreement no. 785907 (Human Brain Project HBP SGA2) and no. 754304 (DEEP-EST), and Helmholtz IVF no. SO-092 (Advanced Computing Architectures, ACA). We would also like to acknowledge use of the JURECA supercomputer through VSR grant JINB33.

References

1. Morrison A, Mehring C, Geisel T, Aertsen AD, Diesmann M. Advancing the boundaries of high-connectivity network simulation with distributed computing. Neural Computation. 2005; 17(8): 1776-801.

2. Hahne J, et al. A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations. Frontiers in Neuroinformatics. 2015; 9: 22.

3. Lelarasmee E, Ruehli AE, Sangiovanni-Vincentelli AL. The waveform relaxation method for time-domain analysis of large scale integrated circuits. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 1982; 1(3): 131-45.

4. Jordan J, et al. Extremely scalable spiking neuronal network simulation code: from laptops to exascale computers. Frontiers in Neuroinformatics. 2018; 12: 2.

5. Gewaltig MO, Diesmann M. NEST (NEural Simulation Tool). Scholarpedia. 2007; 2(4): 1430.

P107 Inferring parameters of DBS-induced short-term synaptic plasticity from in-vivo recordings of human brain

Alireza Ghadimi1, Luka Milosevic2, Suneil Kalia3, Mojgan Hodaie3, Andres Lozano3, William Hutchison4, Milos Popovic1, Milad Lankarany1

1University of Toronto, Institute of Biomaterials and Biomedical Engineering, Toronto, Canada; 2University of Tübingen, Department of Neurosurgery, Tübingen, Germany; 3University of Toronto, Department of Surgery, Toronto, Canada; 4University of Toronto, Department of Physiology, Toronto, Canada

Correspondence: Alireza Ghadimi (alireza.ghadimi@mail.utoronto.ca)

BMC Neuroscience 2020, 21(Suppl 1):P107

Background: Short-term synaptic plasticity (STP) is the dynamic change of synaptic weight with respect to presynaptic spiking activity. Recent studies showed that STP is involved in several brain processes. Several computational studies have addressed inferring STP parameters [1-4]; however, these studies (except Ghanbari et al.) relied on in-vitro intracellular recordings from rodents, which are not necessarily representative of in-vivo human brain dynamics. To this end, we developed a parameter inference method that estimates the parameters of DBS-induced STP, verified against experimental data obtained from intracranial recordings of single-neuron activity in surgical patients.

Method: To acquire spiking activity, two closely spaced microelectrodes were placed in the thalamic ventral intermediate nucleus (Vim) during awake DBS surgeries. One electrode was used to record single-unit spiking activity, while the other was used to deliver stimulation pulses at various frequencies (5Hz, 20Hz, 30Hz, 50Hz, 100Hz, 200Hz). The included data were collected from 15 patients [5]. The narrow stimulus pulses were removed, after which data were high-pass filtered to better isolate single unit activity. Using stimulus artifacts as triggers, we extracted the instantaneous firing rate of neurons in response to each DBS pulse of the 5Hz stimulation trains, and averaged over all inter-pulse intervals. Due to the low stimulation rate (i.e. 5Hz), no STP is induced in this data. The resultant waveform is equivalent to the impulse response.

In order to mimic STP behavior, we used the Tsodyks-Markram phenomenological model, which generates the postsynaptic current according to the spiking history of the presynaptic neuron [6]. To reconstruct the firing rate induced by DBS, we feed the DBS pulse train into the Tsodyks-Markram model, which generates a postsynaptic current in response to the DBS pulses. We use this response to build modulated pulse trains that represent the effect of STP by scaling the amplitude of each pulse. The modulated pulse train is then convolved with the impulse response of the neuron to obtain an estimate of the DBS-induced firing rate. The estimated firing rate is compared with the experimental instantaneous firing rates throughout the stimulation trains at each of the other stimulation frequencies.
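
A minimal discrete-time implementation of the Tsodyks-Markram model [6] driven by a pulse train can be sketched as follows. The parameter values (U and the time constants) are illustrative assumptions, not the fitted values from the study; the inference step would adjust them to match the recorded firing rates.

```python
import numpy as np

def tm_response(spike_times, U=0.2, tau_rec=0.5, tau_fac=0.05,
                tau_syn=0.01, dt=1e-4, T=1.0):
    """Tsodyks-Markram synapse: resources x, utilisation u, current I.
    Returns the current trace and the release amplitude u*x at each pulse."""
    t = np.arange(0.0, T, dt)
    pulse_idx = set(np.round(np.asarray(spike_times) / dt).astype(int))
    x, u, I = 1.0, U, 0.0
    trace, releases = np.zeros_like(t), []
    for k in range(len(t)):
        x += dt * (1 - x) / tau_rec      # resource recovery
        u += dt * (U - u) / tau_fac      # facilitation decays back to U
        I += dt * (-I / tau_syn)         # synaptic current decay
        if k in pulse_idx:               # a DBS pulse arrives
            releases.append(u * x)       # released fraction of resources
            I += u * x
            x -= u * x
            u += U * (1 - u)
        trace[k] = I
    return t, trace, releases

# e.g. a 100 Hz train: with these parameters successive releases depress
t, trace, rel = tm_response(np.arange(0.1, 0.5, 0.01))
```

The per-pulse release amplitudes `rel` play the role of the modulated pulse-train weights described above; convolving the weighted pulse train with the measured impulse response yields the estimated DBS-induced firing rate.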

To estimate the true parameters of the Tsodyks-Markram model, we minimize the error between the experimental and estimated firing rates. The true parameters should be valid for all frequencies; we therefore define the error function as the average error over the 30Hz, 50Hz, 100Hz, and 200Hz conditions. To minimize the error, we employ Bayesian Adaptive Direct Search [7], a fast derivative-free optimization algorithm. The optimization selects parameters such that the output of the model most accurately represents the dynamic changes in synaptic weights induced by each successive stimulus pulse throughout individual stimulation trains.

Results: The figure below (Fig. 1) shows the experimental data versus the model output generated by the parameter estimation algorithm. The estimated parameters show a good match for all frequencies, verifying the validity of this approach. Overall, the results suggest that this method can be used for the assessment of STP dynamics of in-vivo human neuronal recordings.

Fig. 1

The output of the model with estimated parameters, showing the match between the experimental firing rate extracted during DBS surgery (black) and the model output (blue)

References

1. Ghanbari A, Malyshev A, Volgushev M, Stevenson IH. Estimating short-term synaptic plasticity from pre-and postsynaptic spiking. PLoS Computational Biology. 2017; 13(9): e1005738.

2. Costa RP, Sjostrom PJ, Van Rossum MC. Probabilistic inference of short-term synaptic plasticity in neocortical microcircuits. Frontiers in Computational Neuroscience. 2013; 7: 75.

3. Bird AD, Wall MJ, Richardson MJ. Bayesian inference of synaptic quantal parameters from correlated vesicle release. Frontiers in Computational Neuroscience. 2016; 10: 116.

4. Bykowska OS, Gontier C, Sax AL, Jia DW, Llera-Montero M, Bird AD, Houghton CJ, Pfister JP, Costa RP. Model-based inference of synaptic transmission. Frontiers in Synaptic Neuroscience. 2019; 11: 21.

5. Milosevic L, et al. Deciphering Mechanism of DBS-induced Synaptic Plasticity in Different Sub-structures of Basal Ganglia Network BGN. [in prep]

6. Tsodyks MV, Markram H. The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proceedings of the National Academy of Sciences. 1997; 94(2): 719-23.

7. Acerbi L, Ji W. Practical Bayesian optimization for model fitting with Bayesian adaptive direct search. In: Advances in Neural Information Processing Systems. 2017; 1836-1846.

P108 Ensemble empirical mode decomposition of noisy local field potentials from optogenetic data

Sorinel Oprisan1, Xandre Clementsmith2, Tamas Tompa3, Antonieta Lavin4

1College of Charleston, Department of Physics and Astronomy, Charleston, South Carolina, United States of America; 2College of Charleston, Psychology, Charleston, South Carolina, United States of America; 3University of Miskolc, Department of Preventive Medicine, Miskolc, Hungary; 4Medical University of South Carolina, Neuroscience, Charleston, South Carolina, United States of America

Correspondence: Sorinel Oprisan (oprisans@cofc.edu)

BMC Neuroscience 2020, 21(Suppl 1):P108

Gamma rhythms with frequencies over 30 Hz are thought to reflect cortical information processing. However, signals associated with gamma rhythms are notoriously difficult to record due to their low energy and relatively short duration of the order of a few seconds. In our experiments, of particular interest was the 40 Hz synchronization of neurons, which is believed to be indicative of temporal binding. Temporal binding glues together spatially distributed representations of different features of sensory input, e.g., during the analysis of a visual stimulus, to produce a coherent description of constituent elements.

Our goal was to investigate the effect of systemic cocaine injection on the local field potentials (LFPs) recorded from the medial prefrontal cortex (mPFC) of mice. We used male PV-Cre mice infected with a viral vector [1] that renders certain proteins sensitive to light, such as members of the opsin family, which includes the retinal pigments of visual systems. Through genetic engineering, channelrhodopsins were coupled to sodium channels expressed in neurons, increasing their excitability when exposed to blue light [1].

We previously used nonlinear dynamics tools, such as delay embedding and false nearest neighbors [2,3], to estimate the embedding dimension and the delay time for attractor reconstruction from LFPs [4]. While nonlinear dynamics is a powerful tool for data analysis, recent developments suggest that ensemble empirical mode decomposition (EEMD) could be better suited for short and noisy time series. The traditional EMD method is a data-driven decomposition of the original data into orthogonal Intrinsic Mode Functions (IMFs) [5]. In the presence of noise, the time-scale separation is not perfect, and IMF mixing produces significant energy leaks between modes. The advantage of EEMD is that adding a controlled amount of noise to the data leads to a better demixing of the IMFs. We also applied the Hilbert-Huang transform [5] to the demixed IMFs and computed the instantaneous frequency spectrum. Our results indicate that cocaine significantly shifts the energy distribution towards earlier times during the trial compared to control. Our findings allow us to estimate the contribution of different spectral components quantitatively and to develop a dynamical model of the data.
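As an illustration of the final analysis step, the instantaneous frequency of a demixed IMF can be extracted from its analytic signal. The sketch below uses a synthetic 40 Hz component standing in for one IMF; sampling rate and signal are illustrative, not the experimental settings:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                      # sampling rate (Hz), illustrative
t = np.arange(0, 2.0, 1.0 / fs)  # 2 s of data
# Synthetic narrow-band "gamma" component, standing in for one demixed IMF
imf = np.sin(2 * np.pi * 40.0 * t)

# Analytic signal via the Hilbert transform
analytic = hilbert(imf)
phase = np.unwrap(np.angle(analytic))
# Instantaneous frequency (Hz) from the phase derivative
inst_freq = np.diff(phase) * fs / (2 * np.pi)

# Away from the edges the estimate sits close to 40 Hz
est = float(np.median(inst_freq[100:-100]))
```

For real LFP data, the same two lines (unwrapped phase, then its derivative) applied to each IMF give the Hilbert spectrum from which the energy distribution over the trial can be computed.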

References

1. Dilgen JE, Tompa T, Saggu S, Naselaris T, Lavin A. Optogenetically evoked gamma oscillations are disturbed by cocaine administration. Frontiers in Cellular Neuroscience. 2013; 7: 213.

2. Oprisan SA, Lynn PE, Tompa T, Lavin A. Low-dimensional attractor for neural activity from local field potentials in optogenetic mice. Frontiers in Computational Neuroscience. 2015; 9: 125.

3. Oprisan SA, Imperatore J, Helms J, Tompa T, Lavin A. Cocaine-induced changes in low-dimensional attractors of local field potentials in optogenetic mice. Frontiers in Computational Neuroscience. 2018; 12: 1.

4. Oprisan SA, Clementsmith X, Tompa T, Lavin A. Dopamine receptor antagonists effects on low-dimensional attractors of local field potentials in optogenetic mice. PLoS ONE. 2019; 14(10): e0223469.

5. Huang NE, et al. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proceedings of the Royal Society of London A. 1998; 454: 903-995.

P109 Investigating water transport mechanisms in astrocytes with high dimensional parameter estimation

Pierre-Louis Gagnon1, Kaspar Rothenfusser2, Nicolas Doyon3, Pierre Marquet4, François Laviolette5

1Université Laval, Département d’Informatique et Génie Logiciel, Quebec, Canada; 2Centre hospitalier universitaire vaudois, Département de psychiatrie, Lausanne, Switzerland; 3Centre hospitalier universitaire vaudois, Department of Mathematics and Statistics, Quebec, Canada; 4Université Laval, Department of Psychiatry and Neuroscience, Quebec, Canada; 5Université Laval, Department of Software Engineering and Computer Science, Quebec, Canada

Correspondence: Pierre-Louis Gagnon (pierre-louis.gagnon.1@ulaval.ca)

BMC Neuroscience 2020, 21(Suppl 1):P109

Holographic microscopy allows one to measure subtle volume changes of cells subjected to challenges such as an osmotic shock or a sudden increase in extracellular potassium. Interpreting volumetric data, however, remains a challenge. Specifically, relating the amplitude of volume changes to the biophysical properties of cells, such as passive permeability to water or the rate of water transport by cation chloride cotransporters, is a difficult but important task. Indeed, mechanisms of volume regulation are key for cell resilience and survival. Experimentally, the second author measured the volume response, as well as the change in sodium concentration, of astrocytes subjected to bath-applied hypo-osmotic solutions, solutions with a high potassium concentration, or solutions containing glutamate. Overall, he measured the time course of the response of over 2000 astrocytes. To interpret these rich data, we developed a mathematical model based on our biophysical knowledge of astrocytes. This model relates, on the one hand, the experimental perturbations of the extracellular medium and, on the other, the properties of the cell (such as its various conductances or the strengths of its transporters) to its responses in terms of volume change, changes in ionic concentrations, and membrane potential. Determining the biophysical properties of cells thus boils down to a problem of model calibration. This presentation mainly focuses on the work of the first author, who designed and implemented a gradient-based optimization algorithm to estimate model parameters and find the values which best explain the data coming from distinct modalities and astrocytes.

A first computational challenge is to combine data from different modalities. In some experiments, the sodium response is measured, while in others, the volume response is inferred from phase measurements. We also take advantage of the fact that expert knowledge provides information on variables which are not measured. For example, even if membrane potential is not measured, we impose that it lies between -100 mV and -50 mV at equilibrium. Combining these different information sources translates into a complex loss function. Furthermore, using a priori knowledge of the parameter values, we developed a Bayesian approach. Another challenge comes from the fact that different measurements come from different cells. Our goal is thus not to infer a single set of parameters but rather to infer how biophysical parameters are distributed within the population of cells. This was achieved by using a Tikhonov approach which penalizes parameter values lying far from the average of the distribution.
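The population-level calibration idea can be sketched with a toy one-parameter response model standing in for the full biophysical model; the response function, noise level, learning rate and penalty weight below are all illustrative assumptions, not the authors' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)

def response(theta):
    # Toy stand-in for the biophysical model: a saturating volume change
    return theta * (1.0 - np.exp(-t / 2.0))

# Synthetic "population": per-cell parameters scattered around a common mean
true_theta = 1.0 + 0.2 * rng.standard_normal(20)
data = np.array([response(th) + 0.05 * rng.standard_normal(t.size)
                 for th in true_theta])

lam = 0.5            # Tikhonov weight pulling cells toward the population mean
lr = 0.01            # gradient-descent step size
theta = np.ones(20)  # initial guess, one parameter per cell

for _ in range(2000):
    pred = np.array([response(th) for th in theta])
    # Gradient of the per-cell squared-error term
    grad = np.array([2.0 * np.sum((pred[i] - data[i]) * (1.0 - np.exp(-t / 2.0)))
                     for i in range(20)])
    # Tikhonov penalty sum_i (theta_i - mean(theta))^2 keeps estimates coherent
    grad += 2.0 * lam * (theta - theta.mean())
    theta -= lr * grad
```

The penalty term regularizes poorly constrained cells toward the population average while still letting well-constrained cells deviate, which is the essence of inferring a parameter distribution rather than a single value.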

With our algorithm, we were able to infer the strength of the sodium-potassium ATPase pump in each cell with good precision. This could be useful in identifying cells which are more vulnerable. Parameters related to water transport, such as the passive membrane permeability to water or the rate of water transport through cation chloride cotransporters, are elusive and cannot be determined by conventional methods. Our inference algorithm provided information on these values. Finally, our algorithm is flexible enough to adapt rapidly to take advantage of new experiment types or new data modalities.

P110 Synchronization and resilience in the Kuramoto white matter network model with adaptive state-dependent delays

Daniel Park1, Jeremie Lefebvre2

1University of Toronto, Department of Mathematics, Toronto, Canada; 2University of Ottawa, Department of Biology, Ottawa, Canada

Correspondence: Daniel Park (seonghyun.park@mail.utoronto.ca)

BMC Neuroscience 2020, 21(Suppl 1):P110

Myelin sheaths around axons are formed by mature oligodendrocytes and play a critical part in regulating signal transmission in the nervous system. Contrary to traditional assumptions, recent experiments have revealed that myelin remodels itself in an activity-dependent way, during both developmental stages and well into adulthood in mammalian subjects. Indeed, it has been shown that myelin structure is affected by extrinsic factors such as one's social environment and intensified learning activity. As a result, axonal conduction delays continuously adjust in order to regulate the timing of neural signals propagating between different brain regions. While there is strong empirical support for such phenomena, this plasticity mechanism has yet to be extensively modeled in computational neuroscience. As a preliminary step, we incorporate adaptive myelination, in the form of state-dependent delays, into neural network models and analyze how it alters network dynamics. In particular, we ask what role myelin plasticity plays in brain synchrony, which is a fundamental element of neurological function. Brain synchrony is represented in simplified form in coupled phase-oscillator models such as the Kuramoto network model. As a prototype, we equip the Kuramoto model with a distribution of variable delays governed by a plasticity rule with a phase-difference gain, which allows the delays and oscillatory phases to evolve over time with mutually dependent dynamics. We analyzed the equilibria and stability of this system and applied our results to high-dimensional networks. Our joint mathematical and numerical analysis demonstrates that plastic delays act as a stabilizing mechanism promoting the network's ability to maintain synchronous activity. At the level of high-dimensional networks, our work also shows that global synchronization is more resilient to perturbations of, and injuries to, the network architecture. Specifically, our numerical experiments imply that plastic delays improve a high-dimensional system's resilience in recovering synchrony after a sustained injury. Our results provide key insights into the analysis and potential significance of activity-dependent myelination in large-scale brain synchrony.
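The model structure can be illustrated with a minimal two-oscillator sketch: delay-coupled Kuramoto phases with a delay that adapts toward a baseline under a phase-difference-gated rule. The specific adaptation rule and all parameter values here are illustrative assumptions, not the authors' exact plasticity rule:

```python
import numpy as np

# Two delay-coupled Kuramoto oscillators with plastic conduction delays.
dt, T = 0.001, 60.0
steps = int(T / dt)
omega, K = 1.0, 1.0            # natural frequency (rad/s), coupling strength
eps, tau0 = 0.5, 0.1           # adaptation rate and baseline delay (s)

max_delay = int(0.5 / dt)      # history buffer length (samples)
theta = np.zeros((steps + max_delay, 2))
theta[:max_delay] = [0.0, 1.0]  # initial phase offset of 1 rad
tau = np.array([tau0, tau0])    # tau[i]: delay on the input to oscillator i

for n in range(max_delay, steps + max_delay):
    d = (tau / dt).astype(int)
    # Delayed phase of the partner oscillator
    delayed = np.array([theta[n - 1 - d[0], 1], theta[n - 1 - d[1], 0]])
    err = delayed - theta[n - 1]
    theta[n] = theta[n - 1] + dt * (omega + K * np.sin(err))
    # Illustrative phase-difference-gated adaptation, relaxing toward tau0
    tau += dt * eps * (tau0 - tau + 0.05 * np.sin(err))
    tau = np.clip(tau, dt, 0.4)

# Kuramoto order parameter at the end of the run (1 = full synchrony)
r = float(abs(np.exp(1j * theta[-1]).mean()))
```

With identical natural frequencies and a positive coupling, the delayed pair settles into a phase-locked state while the delays drift to a consistent equilibrium, which mirrors the stabilizing role of plastic delays described above.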

P111 The impact of noise on the temporal patterning of neural synchronization

Leonid Rubchinsky1, Joel Zirkle2

1Indiana University Purdue University Indianapolis and Indiana University School of Medicine, Department of Mathematical Sciences and Stark Neurosciences Research Institute, Indianapolis, United States of America; 2Indiana University Purdue University Indianapolis, Department of Mathematical Sciences, Indianapolis, United States of America

Correspondence: Leonid Rubchinsky (lrubchin@iupui.edu)

BMC Neuroscience 2020, 21(Suppl 1):P111

Neural synchrony in the brain is often present in an intermittent fashion, i.e., there are intervals of synchronized activity interspersed with intervals of desynchronized activity. A series of experimental studies showed that the temporal patterning of neural synchronization may be very specific, exhibiting predominantly short (although potentially numerous) desynchronized episodes [1], and may be correlated with behavior (even if the average synchrony strength is not changed) [2-4]. Prior computational neuroscience research showed that a network with many short desynchronized intervals may be functionally different from a network with few long desynchronized intervals [5]. In this study, we investigated the effect of noise on the temporal patterns of synchronization. We employed a simple network of two conductance-based neurons that were mutually connected via excitatory synapses. The resulting dynamics of the network were studied using the same time-series analysis methods used in prior experimental and computational studies. It is well known that synchrony strength degrades with noise. We found that noise also affects the temporal patterning of synchrony: an increase in the noise level promotes dynamics with predominantly short desynchronizations. Thus, noise may be one of the mechanisms contributing to the short desynchronization dynamics observed in multiple experimental studies.
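The episode statistics underlying such analyses can be extracted from a phase-difference time series by thresholding and measuring run lengths; a minimal sketch, where the threshold and the toy data are illustrative rather than the exact pipeline of refs. [1-5]:

```python
import numpy as np

def desync_durations(phase_diff, threshold=np.pi / 2):
    """Durations (in samples) of episodes where the wrapped phase
    difference exceeds the threshold, i.e. the pair is desynchronized."""
    wrapped = np.angle(np.exp(1j * phase_diff))        # wrap to (-pi, pi]
    desync = np.abs(wrapped) > threshold
    # Edges of contiguous desynchronized runs
    edges = np.diff(np.concatenate(([0], desync.astype(int), [0])))
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    return ends - starts

# Toy check: two desynchronized episodes of lengths 3 and 5 samples
pd = np.zeros(30)
pd[5:8] = np.pi          # 3 samples out of sync
pd[15:20] = np.pi        # 5 samples out of sync
durations = desync_durations(pd)   # -> array([3, 5])
```

Comparing histograms of such durations across noise levels is one way to quantify whether short desynchronizations predominate.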

Acknowledgements: This work was supported by NSF grant DMS 1813819.

References

1. Ahn S, Rubchinsky LL. Short desynchronization episodes prevail in synchronous dynamics of human brain rhythms. Chaos. 2013; 23: 013138.

2. Ahn S, Rubchinsky LL, Lapish CC. Dynamical reorganization of synchronous activity patterns in prefrontal cortex - hippocampus networks during behavioral sensitization. Cerebral Cortex. 2014; 24: 2553-2561.

3. Ahn S, Zauber SE, Worth RM, Witt T, Rubchinsky LL. Neural synchronization: average strength vs. temporal patterning. Clinical Neurophysiology. 2018; 129: 842-844.

4. Malaia E, Ahn S, Rubchinsky LL. Dysregulation of temporal dynamics of synchronous neural activity in adolescents on autism spectrum. Autism Research. 2020; 13: 24-31.

5. Ahn S, Rubchinsky LL. Potential mechanisms and functions of intermittent neural synchronization. Frontiers in Computational Neuroscience. 2017; 11: 44.

P112 Comparing Drosophila neural circuit models with FlyBrainLab

Aurel A Lazar, Tingkai Liu, Mehmet K Turkcan, Yiyin Zhou

Columbia University, Electrical Engineering, New York, United States of America

Correspondence: Mehmet K Turkcan (mkt2126@columbia.edu)

BMC Neuroscience 2020, 21(Suppl 1):P112

In recent years, a wealth of Drosophila neuroscience data have become available. These include cell type, connectome and synaptome datasets for both the larva and adult fly [1-4]. To facilitate integration across data modalities and to accelerate understanding the functional logic of the fly brain, we developed an interactive computing environment called FlyBrainLab [5].

FlyBrainLab brings together tools enabling the morphological visualization and exploration of large connectomics datasets, interactive circuit construction and visualization, multi-GPU execution of neural circuit models for in silico experimentation, and libraries to aid data analysis. FlyBrainLab provides the flexibility to readily navigate between the in vivo circuits and executable in silico circuits. Moreover, FlyBrainLab methodologically supports the efficient comparison of fly brain circuit models, either across model instances developed by different researchers, or across different developmental stages of the fruit fly. We provide two example comparisons below.

First, we constructed a wild-type biological central complex (CX) circuit followed by a corresponding interactive baseline CX circuit diagram (Fig. 1, left). This enabled us to automatically map three CX circuit models described in the literature into executable circuits (Fig. 1, middle-right). With these circuits on the same platform, we devised the same set of inputs to the CX models and an evaluation criterion. The evaluation of the function of the executable circuits revealed the differences in modeling assumptions and the less visible details underlying the mapping between neuroanatomy, neurocircuitry and computation in the executable circuits. Based on these and other comparisons, new executable CX circuit models can be developed, evaluated and scrutinized by the research community.

Fig. 1

From the wild-type biological CX circuit and its circuit diagram (left column), two additional models of the CX are instantiated and compared (middle and right columns)

Second, by integrating biological data across adult and larval flies, we developed executable models of the early olfactory systems at both developmental stages. Circuit models are implemented for the Antenna, the Antennal Lobe and the Mushroom Body, and are interactively configurable. We evaluated the I/O characteristics of the adult and larva, and discovered significant differences in odorant encoding capabilities. Notably, the adult excels at distinctly representing different odorant identities, but requires roughly 10x more neural hardware and, as such, much higher energy consumption. This tradeoff between computation and energy requirements suggests a general principle of adaptation to environmental niches, which we are currently exploring.

Acknowledgments: The research reported here was supported by AFOSR under grant #FA9550-16-1-0410 and DARPA under contract #HR0011-19-9-0035.

References

1. Chiang AS, et al. Three-dimensional reconstruction of brain-wide wiring networks in Drosophila at single-cell resolution. Current Biology. 2011; 21(1): 1-11.

2. Takemura SY, et al. Synaptic circuits and their variations within different columns in the visual system of Drosophila. Proceedings of the National Academy of Sciences. 2015; 112(44): 13711-6.

3. Berck ME, et al. The wiring diagram of a glomerular olfactory system. eLife. 2016; 5: e14859.

4. Xu CS, et al. A connectome of the adult Drosophila central brain. BioRxiv. 2020.

5. Turkcan MK, et al. FlyBrainLab: an interactive computing environment for the fruit fly brain. Society for Neuroscience. 2019.

P113 Dynamically damped stochastic alpha-band relaxation activity in 1/f noise and alpha blocking in resting M/EEG

Rick Evertz1, Damien Hicks2, David Liley3

1Swinburne University of Technology, Melbourne, Australia; 2Centre for Human Psychopharmacology, Optical Sciences Centre, Melbourne, Australia; 3The University of Melbourne, Department of Medicine, Melbourne, Australia

Correspondence: Rick Evertz (revertz@swin.edu.au)

BMC Neuroscience 2020, 21(Suppl 1):P113

The dynamical and physiological basis of alpha-band activity and 1/f noise is a subject of continued speculation. Here we conjecture, on the basis of empirical data analysis, that both of these features can be dynamically unified if resting EEG is conceived of as the sum of multiple stochastically perturbed alpha-band oscillatory relaxation processes. The modulation of alpha-band and 1/f noise activity by dynamic damping is explored in eyes-closed (EC) and eyes-open (EO) resting-state magneto/electroencephalography (M/EEG). We assume that the recorded resting M/EEG is composed of a superposition of stochastically perturbed alpha-band relaxation processes with a distribution of dampings whose functional form is unknown. We solve the inverse problem: from measured M/EEG power spectra we compute the distribution of dampings using Tikhonov regularization methods. The characteristics of the damping distribution are examined across subjects, sensors and recording conditions (EC/EO).
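This kind of inversion can be sketched as a regularized non-negative least-squares problem: the power spectrum is modeled as a weighted sum of Lorentzian spectra of damped alpha relaxation processes over a grid of dampings, and the weights are recovered with Tikhonov regularization. The kernel shape, grids and regularization weight below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np
from scipy.optimize import nnls

f = np.linspace(1, 40, 200)          # frequency grid (Hz)
f0 = 10.0                            # alpha peak frequency, fixed here
gammas = np.linspace(0.2, 5.0, 40)   # candidate dampings (1/s)

# Each column: toy Lorentzian spectrum of one damped alpha relaxation process
A = np.array([[g / (g**2 + (2 * np.pi * (fi - f0))**2) for g in gammas]
              for fi in f])

# Synthetic bimodal damping distribution (weakly + heavily damped modes)
p_true = np.exp(-(gammas - 0.6)**2 / 0.05) + 0.7 * np.exp(-(gammas - 3.0)**2 / 0.5)
spectrum = A @ p_true

# Tikhonov-regularized non-negative inverse: augment A with sqrt(lam)*I
lam = 1e-4
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(gammas.size)])
b_aug = np.concatenate([spectrum, np.zeros(gammas.size)])
p_est, _ = nnls(A_aug, b_aug)
```

The non-negativity constraint is natural here since the unknown is a distribution of dampings, and the Tikhonov augmentation keeps the otherwise ill-posed inversion stable.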

We find robust changes in the estimated damping distribution between the EC and EO recording conditions across participants. Our findings suggest that alpha blocking and the 1/f noise structure are both explicable through a single process of dynamically damped alpha-band activity. The estimated damping distributions are typically found to be bimodal or trimodal (Fig. 1). The number and position of the modes are related to the sharpness of the alpha resonance (amplitude, FWHM) and the slope of the power spectrum. The results suggest that there exists an intimate relationship between resting-state alpha activity and 1/f noise, with changes in both governed by changes to the damping of the underlying alpha relaxation processes. In particular, alpha blocking is observed to be the result of the most weakly damped distribution mode (peak at 0.4-0.6 s^-1) becoming more heavily damped (peak at 1.0-1.5 s^-1). Reductions in the slope of the 1/f noise are the result of the alpha relaxation processes becoming more broadly distributed in their respective dampings, with more weighting towards heavily damped alpha activity. The results suggest a novel way of characterizing resting M/EEG power spectra and provide new insight into the central role that damped alpha-band activity may play in the spatio-temporal features of resting-state M/EEG.

Fig. 1

Top panel: Power spectrum for EC and EO (dashed) conditions plotted in logarithmic and linear coordinates. Power spectrum generated using the estimated damping distributions in the forward problem are shown for EC and EO (solid) alongside the measured power spectra. Bottom panel: Typical damping distribution results found in EC and EO for subjects who demonstrate alpha blocking

Future work will explore the more complex case where we expect a distribution over both frequency and damping for the stochastic relaxation processes, elucidating any frequency dependent damping effects between conditions. The inverse problem can be solved via gradient descent methods where we estimate the 2-dimensional probability density function over frequency and damping from a given power spectrum.

P114 Effect of diverse recoding of granule cells on delay eyeblink conditioning in a cerebellar network

Sang-Yoon Kim, Woochang Lim

Daegu National University of Education, Institute for Computational Neuroscience and Department of Science Education, Daegu, South Korea

Correspondence: Woochang Lim (wclim@icn.re.kr)

BMC Neuroscience 2020, 21(Suppl 1):P114

We consider a ring network for delay eyeblink conditioning, and investigate the effect of diverse firing activities of granule (GR) cells on eyeblink conditioning under a conditioned stimulus (tone) by varying the connection probability p_c from Golgi to GR cells. For an optimal value p_c^*, individual GR cells exhibit diverse spiking patterns which are either well or poorly matched with the unconditioned stimulus (airpuff). These diversely recoded signals, carried via parallel fibers (PFs) from GR cells, are then effectively depressed by the error-teaching signals carried via climbing fibers (CFs) from the inferior olive. Synaptic weights at well-matched PF-Purkinje cell (PC) synapses of active GR cells are strongly depressed via strong long-term depression (LTD), while no LTD occurs at poorly matched PF-PC synapses. This kind of "effective" depression at PF-PC synapses coordinates the firings of PCs, which then exert effective inhibitory coordination on the cerebellar nucleus (CN), which evokes the conditioned response (CR; eyeblink). When the learning trial passes a threshold, the CR occurs. In this case, the timing degree T_d becomes good due to the presence of the poorly matched spiking group, which acts as a protection barrier for the timing. With a further increase in trials, the strength of the CR, S_CR, increases due to strong LTD in the well-matched spiking group, while the timing degree decreases. Thus, the overall efficiency degree L_e (taking into consideration both the timing and strength of the CR) for the eyeblink increases with trials and eventually saturates. By changing p_c, we also investigate delay eyeblink conditioning and find that a plot of L_e versus p_c forms a bell-shaped curve with a peak at p_c^* (where the diversity degree D in the firing of GR cells is also maximal). The more diverse the spiking patterns of GR cells, the more effective the CR for the eyeblink.

Fig. 1

a Diversity degree D versus the connection probability p_c from Golgi cells to granule cells. b1 Timing degree T_d versus p_c, b2 strength of conditioned response S_CR versus p_c, and b3 overall efficiency degree L_e versus p_c

P115 Approximating information filtering of a two-stage neural system

Gregory Knoll, Žiga Bostner, Benjamin Lindner

Humboldt-Universitaet zu Berlin, Physics, Berlin, Germany

Correspondence: Gregory Knoll (gregory.knoll@bccn-berlin.de)

BMC Neuroscience 2020, 21(Suppl 1):P115

Information streams are processed in the brain by populations of neurons tuned to perform specific computations, the results of which are forwarded to subsequent processing stages. Building on theoretical results for the behavior of single neurons and populations, we investigate the extent to which a postsynaptic cell (PSC) can detect the information present in the output stream of a population which has encoded a signal. In this two-stage system, illustrated in Figure 1A, the population is a simple feedforward network of integrate-and-fire neurons which integrate and relay the signal, reminiscent of auditory or electroreceptor afferents in the sensory periphery. Depending on the application, the information relevant for the PSC may be contained in a specific frequency band of the stimulus, requiring the PSC to properly tune its information encoding to that band (information filtering). In the specific setup studied here, information filtering is associated with detecting synchronous activity. It was found that synchronous activity of a neural population selectively encodes information about high-frequency bands of a broadband stimulus, and it was hypothesized that this information can be read out by coincidence detector cells that are activated only by synchronous input. First, we test this hypothesis by matching the key characteristic of information filtering, the spectral coherence function, computed between the PSC output and the stimulus and between the time-dependent synchrony of the population output and the stimulus (Fig. 1B, left). We show that the relations between the synchrony and PSC thresholds, and between the synchrony window and the PSC time constant, are roughly linear (Fig. 1B, right). This implies that the synchronous output of the population can serve as a proxy for the postsynaptic coincidence detector and, conversely, that the PSC can be made to detect synchrony (or coincidence) by adjusting its time constant and threshold. Second, we develop an analytical approximation for the coherence function between the PSC output and the stimulus and demonstrate its accuracy by comparison against numerical simulations (Fig. 1C), both in the fluctuation-dominated and mean-driven regimes of the PSC.
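The coherence-based characterization of information filtering can be illustrated numerically. In the sketch below, a band-limiting filter plus private noise stands in for the population-to-PSC chain (an illustrative stand-in, not the integrate-and-fire network of the study); the coherence between stimulus and output then reveals which frequency band of the stimulus is encoded:

```python
import numpy as np
from scipy.signal import coherence, butter, lfilter

rng = np.random.default_rng(1)
fs = 1000.0
stim = rng.standard_normal(2**16)     # broadband stimulus

# Stand-in for the two-stage system: pass low frequencies, add private noise
b, a = butter(4, 50.0 / (fs / 2))     # 4th-order 50 Hz low-pass
out = lfilter(b, a, stim) + 0.3 * rng.standard_normal(stim.size)

# Spectral coherence between stimulus and output
f, C = coherence(stim, out, fs=fs, nperseg=1024)
low = float(C[(f > 5) & (f < 40)].mean())     # inside the encoded band
high = float(C[(f > 100) & (f < 200)].mean()) # outside the encoded band
```

A high-pass stand-in would instead produce the synchrony-like high-frequency information filtering discussed above; the point of the sketch is only that the coherence function quantifies the filtering.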

Fig. 1

A System diagram of the two-stage neural system. B Matching coherence functions to relate synchronous output criteria to PSC parameters. C Band-pass filtering of information from a broadband stimulus by the two-stage neural system

P116 Astrocyte simulations with realistic morphologies reveal diverse calcium transients in soma and processes

Ippa Seppala1, Laura Keto2, Iiro Ahokainen1, Nanna Förster1, Tiina Manninen2, Marja-Leena Linne1

1Tampere University, Medicine and Health Sciences, Tampere, Finland; 2Tampere University, Faculty of Medicine and Health Technology, Tampere, Finland

Correspondence: Ippa Seppala (ippa.seppala@tuni.fi)

BMC Neuroscience 2020, 21(Suppl 1):P116

The role of astroglia has long been overlooked in the field of computational neuroscience. Lately, their involvement in multiple higher-level brain functions, including neurotransmission, plasticity, memory, and neurological disorders, has been found to be more significant than previously thought. It has been hypothesised that astrocytes fundamentally affect the information processing power of the mammalian brain. As the glia-to-neuron ratio increases when moving from simpler organisms to more complex ones, it is clear that more attention should be directed to glial involvement. Despite the recent advances in neuroglial research, there still exists a lack of glia-specific computational tools. Astroglia differ considerably from neurons in their morphology as well as their biophysical functions [1], making it difficult to acquire reliable simulation results with simulators made for studying neuronal behaviour. As the differences in cellular dynamics between astrocytes and neurons are significant, there is a clear need for tailored methods for simulating the behaviour of glial cells.

One such astrocyte-specific simulator, ASTRO, has been developed [2]. ASTRO builds on the MATLAB and NEURON environments [3] and is capable of representing various biologically relevant astroglial mechanisms such as calcium waves and diffusion. In this work, we used ASTRO to simulate several astrocytic functions with the help of existing in vivo morphologies from various brain areas. We concentrated on calcium transients, as calcium-mediated signaling is thought to be the main mechanism of intra- and intercellular messaging between astroglia and other neural cell types. The time-scales of these calcium-mediated events have recently been shown to differ considerably across spatial locations within astrocytes. We were able to reproduce these results in silico by simulating a morphologically detailed computational model that we developed based on previous work [4,5]. This was partly due to ASTRO's capability to analyse the microscopic calcium dynamics in fine processes, branches and leaves.
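The compartment-dependent time-scales can be conveyed with a minimal two-compartment sketch (in Python, not ASTRO): a brief influx drives both the soma and a thin process, and the process's larger surface-to-volume ratio is summarized by a faster effective extrusion time constant. All parameter values are illustrative assumptions:

```python
import numpy as np

dt, T = 0.01, 20.0                   # time step and duration (s)
t = np.arange(0, T, dt)
tau = {"soma": 4.0, "process": 0.5}  # effective extrusion time constants (s)
ca = {k: np.zeros(t.size) for k in tau}
influx = np.where((t > 1.0) & (t < 1.2), 1.0, 0.0)   # brief stimulus

for k, tk in tau.items():
    for n in range(1, t.size):
        # dCa/dt = influx - Ca/tau: a leaky integrator per compartment
        ca[k][n] = ca[k][n - 1] + dt * (influx[n - 1] - ca[k][n - 1] / tk)

def fwhm(x):
    """Width of the transient at half its maximum, in seconds."""
    return float((x > x.max() / 2).sum()) * dt
```

With these parameters the process transient is several times narrower than the somatic one, echoing the spatially diverse calcium dynamics described above.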

With our model, ASTRO proved to be a promising tool for simulating astrocytic functions and could potentially offer novel insights into glia-neuron interactions in future work.

Acknowledgements: The work was supported by Academy of Finland through grants (297893, 326494, 326495) and the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2).

References

1. Calì C, et al. 3D cellular reconstruction of cortical glia and parenchymal morphometric analysis from serial block-face electron microscopy of juvenile rat. Progress in Neurobiology. 2019; 183: 101696.

2. Savtchenko LP, et al. Disentangling astroglial physiology with a realistic cell model in silico. Nature Communications. 2018; 9(1): 1-15.

3. Carnevale T. Neuron simulation environment. Scholarpedia. 2007; 2(6): 1378.

4. Manninen T, Havela R, Linne M-L. Reproducibility and comparability of computational models for astrocyte calcium excitability. Frontiers in Neuroinformatics. 2017; 11: 11.

5. Manninen T, Havela R, Linne M-L. Computational models for calcium-mediated astrocyte functions. Frontiers in Computational Neuroscience. 2018; 12: 14.

P117 The interplay of neural mechanisms regulates spontaneous cortical network activity: inferring the role of key mechanisms using data-driven modeling

Jugoslava Acimovic, Tiina Manninen, Heidi Teppola, Marja-Leena Linne

Tampere University, Faculty of Medicine and Health Technology, Tampere, Finland

Correspondence: Jugoslava Acimovic (jugoslava.acimovic@tuni.fi)

BMC Neuroscience 2020, 21(Suppl 1):P117

In isolated neural systems devoid of external stimuli, the exchange between neuronal, synaptic and putatively also glial mechanisms gives rise to spontaneous self-sustained synchronous activity. This phenomenon has been extensively documented in dissociated cortical cultures in vitro that are routinely used to study neural mechanisms in health and disease. We examine these mechanisms using a new data-driven computational modeling approach. The approach integrates standard spiking network models, non-standard glial mechanisms and network-level experimental data.

The experimental data represent spontaneous activity in dissociated rat cortical cultures recorded using microelectrode arrays. The recordings were performed under several experimental protocols that involved pharmacological manipulation of network activity. Under each protocol, the activity exhibited characteristic network bursts: short intervals (100 ms to 1 s) of intense network-wide spiking interleaved with longer (~10 s) periods of sparse, uncorrelated spikes. The data were analysed to extract, among other properties, the duration, intensity and frequency of burst events [1].
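Burst statistics of this kind can be extracted by thresholding the binned population spike count; a minimal sketch with a toy raster (bin width, threshold and the synthetic data are illustrative, not the analysis parameters of ref. [1]):

```python
import numpy as np

def detect_bursts(spike_times, t_max, bin_ms=10.0, thresh=5.0):
    """Detect network bursts as contiguous runs of time bins whose
    population spike count exceeds a threshold. Returns a list of
    (start_ms, end_ms, spike_count) tuples, one per burst."""
    edges = np.arange(0.0, t_max + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times, edges)
    active = counts > thresh
    d = np.diff(np.concatenate(([0], active.astype(int), [0])))
    starts, ends = np.flatnonzero(d == 1), np.flatnonzero(d == -1)
    return [(edges[s], edges[e], int(counts[s:e].sum()))
            for s, e in zip(starts, ends)]

# Toy raster: sparse background plus one dense burst between 500 and 700 ms
rng = np.random.default_rng(2)
spikes = np.sort(np.concatenate([
    rng.uniform(0, 2000, 40),       # sparse uncorrelated spikes
    rng.uniform(500, 700, 400),     # network-wide burst
]))
bursts = detect_bursts(spikes, t_max=2000.0)
```

From the returned tuples, burst duration (end minus start), intensity (spike count) and frequency (bursts per recording time) follow directly.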

The computational model incorporates fast burst propagation and decay mechanisms, as well as slower burst initiation mechanisms. We first constructed the fast part of the model as a generic spiking neuronal network and optimized it to the experimental data describing intra-burst properties. We developed a model-fitting routine relying on multi-objective optimization [2]. The optimized 'fast' model was then extended with a selected astrocytic mechanism operating on a time-scale similar to that of network burst initiation [3]. Typically, burst initiation is attributed to a combination of noisy inputs and the dynamics of neuronal (and synaptic) adaptation currents. While noise provides the necessary depolarization of the cell membrane, the adaptation currents prevent fast initiation of the next burst event. The noise may account for the randomness in ion channel opening and closing, spontaneous synaptic release and other sources of randomness, while the adaptation accounts for the kinetics of various ion channels. We explore the role of a non-standard deterministic mechanism introduced through a slow inward current from astrocytes to neurons.

We demonstrate that the fast neuronal part of the model successfully reproduces the intra-burst dynamics, including the duration and intensity of network bursts. The model is flexible enough to account for several experimental conditions. Coupled to the slower astrocyte-neuron interaction mechanism, the system becomes capable of generating bursts with a frequency proportional to that seen in vitro.

Acknowledgements: This research has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907 (Human Brain Project SGA2). Funding was also received from the Academy of Finland through grants No. 297893, 326494, and 326495.

References

  1. Teppola H, Aćimović J, Linne ML. Unique features of network bursts emerge from the complex interplay of excitatory and inhibitory receptors in rat neocortical networks. Frontiers in Cellular Neuroscience. 2019; 13: 377.

  2. Aćimović J, Teppola H, Mäki-Marttunen T, Linne M-L. Data-driven study of synchronous population activity in generic spiking neuronal networks: How much do we capture using the minimal model for the considered phenomena? BMC Neuroscience 2018; 19(2): 136.

  3. Aćimović J, et al. Modeling the influence of neuron-astrocyte interactions on signal transmission in neuronal networks. BMC Neuroscience 2019; 20(1): 178.

P118 Compartmental models for mammalian cortical pyramidal neurons: a survey of published models, model complexity and parameter sensitivity

Emma Huhtala1, Jugoslava Acimovic2, Mikko Lehtimäki2, Marja-Leena Linne3

1Tampere University, Tampere, Finland; 2Tampere University, Faculty of Medicine and Health Technology, Tampere, Finland; 3Tampere University, Medicine and Health Sciences, Tampere, Finland

Correspondence: Emma Huhtala (emma.huhtala@tuni.fi)

BMC Neuroscience 2020, 21(Suppl 1):P118

Pyramidal neurons are abundant in the neocortex and known to contribute to diverse and complex cognitive functions. To better understand the role of individual pyramidal cell types, we need neuron models that can be efficiently and reliably simulated and analyzed when embedded into circuit-level models, yet that maintain the necessary structural and functional complexity. In this study we contribute to this goal in the following ways: 1) We review and compare over 50 published mammalian cortical pyramidal neuron models available in the literature and public repositories (with a focus on models implemented in the NEURON simulation environment). 2) We test two recently published tools that tackle critical issues for detailed neuron modeling: the role of model complexity and size, and the sensitivity to numerous and occasionally unreliable model parameters.

For goal 1, we compared the models based on the following criteria: brain area and layer, number of compartments, biophysical properties, software used to simulate the model, and the amount of experimental data used to construct and fine-tune the model. Based on this work, we chose a layer 2/3 pyramidal cell model from the rat somatosensory cortex, acquired from the Blue Brain Project data portal [1], as our test case. We used two recently published toolboxes, Neuron_Reduce and Uncertainpy, to analyze the model. Neuron_Reduce [2] is designed to simplify the morphology of a model while replicating its dynamics and accelerating simulations considerably. Uncertainpy [3] allows the user to examine the uncertainty and sensitivity of model parameters.

The analyses carried out in this work show that the Neuron_Reduce toolbox is effective for simplifying the morphological structure of neuron models. With moderate reduction (preserving 10-25% of the original model compartments), Neuron_Reduce simplifies the dendritic structure of the cell while replicating the behavior of the original model. However, a dramatic reduction (preserving 3% of the original compartments) led to changes in the shape of action potentials. The results obtained with Uncertainpy suggest that the reduced model is sensitive to only a subset of the parameters that affect the original model. For example, while the original model depends on several somatic conductances, the reduced model is sensitive mainly to sodium and potassium channel conductances. Based on our testing, both toolboxes are useful for analyzing models in neuroscience. In addition, they can facilitate the re-use of compartmental models in new modeling initiatives, particularly when modeling multiple spatiotemporal scales of brain phenomena.

Acknowledgments: This work was supported by the Academy of Finland (decision Nos 297893 and 318879).

References

  1. Blue Brain Portal – Digitally reconstructing and simulating the brain [Online]. Available: https://portal.bluebrain.epfl.ch/. [Accessed: 12-May-2020]

  2. Amsalem O, et al. An efficient analytical reduction of detailed nonlinear neuron models. Nature Communications. 2020; 11(1): 1-3.

  3. Tennøe S, Halnes G, Einevoll GT. Uncertainpy: A Python toolbox for uncertainty quantification and sensitivity analysis in computational neuroscience. Frontiers in Neuroinformatics. 2018; 12: 49.

P119 Relating transfer entropy to network structure and motifs, and implications for brain network inference

Leonardo Novelli, Joseph Lizier

The University of Sydney, Centre for Complex Systems, Sydney, Australia

Correspondence: Leonardo Novelli (lnov6504@uni.sydney.edu.au)

BMC Neuroscience 2020, 21(Suppl 1):P119

Transfer entropy is an established method for the analysis of directed relationships in neuroimaging data. In its original formulation, transfer entropy is a bivariate measure, i.e., a measure between a pair of elements or nodes [1]. However, when two nodes are embedded in a network, the strength of their direct coupling is not sufficient to fully characterize the transfer entropy between them. This is because transfer entropy results from network effects due to interactions between all the nodes.

In this theoretical work, we study the bivariate transfer entropy as a function of network structure, when the link weights are known. In particular, we use a discrete-time linear Gaussian model to investigate the contribution of small motifs, i.e., small subnetwork configurations comprising two to four nodes. Although the linear model is simplistic, it is widely used and has the advantage of being analytically tractable. Moreover, using this model means that our results extend to Granger causality, which is equivalent to transfer entropy for Gaussian variables.
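For Gaussian variables this equivalence can be made concrete: transfer entropy reduces to half the log-ratio of residual variances from two least-squares regressions (target history alone vs. target plus source history). The sketch below uses history length 1 and illustrative coupling parameters; it is a generic demonstration, not the authors' code.

```python
import math
import random

def _lstsq(X, y):
    """Ordinary least squares via the normal equations (small systems)."""
    k = len(X[0])
    A = [[sum(r[p] * r[q] for r in X) for q in range(k)] for p in range(k)]
    b = [sum(r[p] * yi for r, yi in zip(X, y)) for p in range(k)]
    for p in range(k):                              # Gaussian elimination
        piv = max(range(p, k), key=lambda i: abs(A[i][p]))
        A[p], A[piv], b[p], b[piv] = A[piv], A[p], b[piv], b[p]
        for i in range(p + 1, k):
            f = A[i][p] / A[p][p]
            for q in range(p, k):
                A[i][q] -= f * A[p][q]
            b[i] -= f * b[p]
    coef = [0.0] * k
    for p in range(k - 1, -1, -1):
        coef[p] = (b[p] - sum(A[p][q] * coef[q]
                              for q in range(p + 1, k))) / A[p][p]
    return coef

def _resid_var(X, y):
    c = _lstsq(X, y)
    return sum((yi - sum(ci * xi for ci, xi in zip(c, r))) ** 2
               for r, yi in zip(X, y)) / len(y)

def transfer_entropy(src, tgt):
    """Gaussian transfer entropy src -> tgt with history length 1:
    0.5 * log( Var(tgt_t | tgt_{t-1}) / Var(tgt_t | tgt_{t-1}, src_{t-1}) )."""
    y = tgt[1:]
    reduced = [[tgt[i], 1.0] for i in range(len(tgt) - 1)]
    full = [[tgt[i], src[i], 1.0] for i in range(len(tgt) - 1)]
    return 0.5 * math.log(_resid_var(reduced, y) / _resid_var(full, y))
```

On a simulated unidirectionally coupled pair (X drives Y but not vice versa), the estimate is clearly positive in the coupled direction and near zero in the other, which is exactly the Granger-causality reading of the measure.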

We show analytically that the dependence of transfer entropy on the direct link weight is only a first approximation, valid for weak coupling. More generally, the transfer entropy increases with the in-degree of the source and decreases with the in-degree of the target, which suggests an asymmetry of information transfer between hubs and peripheral nodes.

Importantly, these results also have implications for directed functional network inference from time series, which is one of the main applications of transfer entropy in neuroscience. The asymmetry of information transfer suggests that links from hubs to peripheral nodes would generally be easier to infer than links between hubs or links from peripheral nodes to hubs. This could bias the estimation of network properties such as the degree distribution and the rich-club coefficient.

In addition to the dependence on the in-degree, the transfer entropy is directly proportional to the weighted motifs involving common parents or multiple walks from the source to the target (Fig. 1). These motifs are more abundant in clustered or modular networks than in random networks, suggesting higher transfer in the former. Further, if the network has only positive edge weights, the transfer entropy correlates positively with the number of such motifs. This applies on average in the mammalian cortex, since the majority of connections are thought to be excitatory, implying that directed functional network inference with transfer entropy is better able to infer links within brain modules (where such motifs enhance transfer entropy values) than links across modules.

Fig. 1

Network motifs involved in the bivariate transfer entropy from node X to node Y

Reference

  1. Schreiber T. Measuring information transfer. Physical Review Letters. 2000; 85(2): 461.

P120 Ageing-related changes in prosocial reinforcement learning and links to psychopathic traits

Jo Cutler1, Marco Wittmann1, Ayat Abdurahman2, Luca Hargitai1, Daniel Drew1, Masud Husain1, Patricia Lockwood1

1University of Oxford, Psychology, Oxford, United Kingdom; 2University of Cambridge, Psychology Department, Cambridge, United Kingdom

Correspondence: Jo Cutler (jo.cutler@psy.ox.ac.uk)

BMC Neuroscience 2020, 21(Suppl 1):P120

Prosocial behaviours, actions that help others, are vital for maintaining social bonds and are linked with improved health. However, our ability to learn which of our actions help others could change as we get older, as existing studies suggest declines in reinforcement learning across the lifespan [1]. This decline in associative learning could be explained by the significant age-related decrease in dopamine transmission [2] which has been suggested to code prediction errors [3]. Alternatively, prosocial learning might not only rely on learning abilities but also on the motivation to help others. This motivation, which is reduced in disorders such as psychopathy, might also shift with age, with a trend for lower levels of antisocial behaviour in older adults [4]. Interestingly, the decrease in dopamine levels in older adults could also support this hypothesis of increased prosociality, as higher dopamine has been linked to lower altruism [5].

Here, using computational modelling of a probabilistic reinforcement learning task (Fig. 1), we tested whether younger (age 18-36) and older (age 60-80; total n=152) adults can learn to gain rewards for themselves, another person (prosocial), or neither individual (control). We replicated existing work showing that younger adults were faster to learn when their actions benefitted themselves than when they helped others [6]. Strikingly, however, older adults showed a reduced self-bias compared to younger adults, with learning rates that did not significantly differ between self and other. In other words, older adults showed a relative increase in the willingness to learn about actions that helped others. Moreover, we found that these differences in prosocial learning could emerge from more basic changes in personality characteristics over the lifespan. In older adults, psychopathic traits were significantly reduced and correlated with the difference between prosocial and self learning rates. Importantly, the difference between self and other learning rates was most reduced in older people with the lowest psychopathic traits. Overall, we show that older adults are less self-biased than younger adults, and that this change is associated with a decline in psychopathic traits. These findings highlight the importance of examining individual differences across development and have important implications for theoretical and neurobiological accounts of healthy ageing.
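The core of such a model is a Rescorla-Wagner (prediction-error) update applied with a condition-specific learning rate. The sketch below is a generic illustration with arbitrary parameters, not the actual model fitted in the study.

```python
import math
import random

def rw_update(q, reward, alpha):
    """One Rescorla-Wagner step: move the value estimate toward the
    reward by a fraction alpha of the prediction error."""
    return q + alpha * (reward - q)

def run_condition(alpha, beta=5.0, n_trials=200, p_good=0.75, seed=0):
    """Simulate one agent in one condition (self / other / no one) of a
    two-armed probabilistic task; in this sketch only the learning rate
    differs between conditions. Returns the fraction of correct choices."""
    rng = random.Random(seed)
    q = [0.0, 0.0]
    correct = 0
    for _ in range(n_trials):
        p0 = 1.0 / (1.0 + math.exp(-beta * (q[0] - q[1])))   # softmax choice
        a = 0 if rng.random() < p0 else 1
        correct += (a == 0)                  # arm 0 is the better option
        r = 1.0 if rng.random() < (p_good if a == 0 else 1 - p_good) else 0.0
        q[a] = rw_update(q[a], r, alpha)
    return correct / n_trials
```

Fitting separate alphas per condition (self, other, no one) and comparing them across age groups is the analysis pattern the abstract describes.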

Fig. 1

Behavioural task and data. A Reinforcement learning task: participants played for either themselves, the other participant, or no one. B Group-level learning curves showing choice behaviour in the three learning conditions for each age group. C Comparison of learning rates from the computational model. D Median difference between learning rates in the other and self conditions. Asterisks represent significant differences (p<.05)

References

  1. Mell T, Heekeren HR, Marschner A, Wartenburger I, Villringer A, Reischies FM. Effect of aging on stimulus-reward association learning. Neuropsychologia. 2005; 43(4): 554-563.

  2. Li S-C, Lindenberger U, Bäckman L. Dopaminergic modulation of cognition across the life span. Neuroscience and Biobehavioral Reviews. 2010; 34(5): 625-630.

  3. Schultz W. Dopamine reward prediction error coding. Dialogues in Clinical Neuroscience. 2016; 18(1): 23-32.

  4. Gill DJ, Crino RD. The relationship between psychopathy and age in a non-clinical community convenience sample. Psychiatry, Psychology and Law. 2012; 19(4): 547-557.

  5. Crockett MJ, et al. Dissociable effects of serotonin and dopamine on the valuation of harm in moral decision making. Current Biology. 2015; 25(14): 1852-9.

  6. Lockwood PL, Apps MAJ, Valton V, Viding E, Roiser JP. Neurocomputational mechanisms of prosocial learning and links to empathy. Proceedings of the National Academy of Sciences. 2016; 113(35): 9763-9768.

P121 Avalanches and emergent activity patterns in simulated mouse primary motor cortex

Donald Doherty, Salvador Dura-Bernal, William W Lytton

SUNY Downstate Medical Center, Department of Physiology and Pharmacology, Brooklyn, New York, United States of America

Correspondence: Donald Doherty (donald.doherty@actionpotential.com)

BMC Neuroscience 2020, 21(Suppl 1):P121

Avalanches display non-Poisson distributions of correlated neuronal activity that may play a significant role in signal processing. The phenomenon appears robust, but its mechanisms remain unknown due to an inability to gather large sample sizes and difficulties in identifying the neuron morphology, biophysics, and connections underlying avalanche dynamics. We set out to understand the relationship between power-law activity patterns, their values, and the neural responses observed from every neuron across different layers, cell populations, and the entire cortical column, using a detailed M1 model with 15 neuron types that simulated the full depth of a 300 μm diameter column with 10,073 neurons and ~18e6 connections. Self-organized, self-sustained activity from our simulations has power-law exponents of -1.51 for avalanche size and -1.98 for duration distributions, in the range noted in both in vitro and in vivo neural avalanche preparations reported by Beggs and Plenz (2003). We applied a 0.57 nA, 100 ms stimulus to a volume 40 μm in diameter spanning the full column depth, at each of 49 gridded locations (40 μm) across the pial surface of our 400 μm diameter cylindrical cortical column. Stimuli applied to 4 locations (8.2%) produced no sustained responses. Self-sustained activity was seen in the other 45 locations and always included activity in IT5B, or IT5B and IT6. In 6 locations activity was restricted to IT5B or IT5B/IT6 alone (avalanche size exponent: ~ -2.8). Intermittent spread of activity from IT5B/IT6 to other neuron types and layers was seen in 24 locations (avalanche size exponent: ~ -2.0). In 15 locations, frequent spread of activity to other neuron types and layers was observed (avalanche size exponent: ~ -1.5). Avalanches were defined using binned spiking activity (1 ms bins): each avalanche was composed of adjacent bins containing one or more action potentials, preceded and followed by at least one empty bin.
A prolonged 10-minute M1 simulation with different connectivity produced 15,579 avalanches during sustained activity after the initial 100 ms stimulation. Again, IT5B/IT6 activity was constant and punctuated by more widespread activity. Three distinct patterns of activity spontaneously recurred and could be characterized by delta, beta, or gamma frequency dominance. All large-scale avalanches were composed of one or a combination of these three recognizable patterns. Between the large-scale avalanches we saw three patterns of activity: 1) continuous IT5B and IT6 neuron activity, 2) vigorous layer 5 and IT6 activity, or 3) vigorous layer 5 and IT6 activity that transitioned to continuous IT5B and IT6 activity. Since cortical column activity with just IT5B and IT6 firing showed little correlation (very steep and narrow distributions of avalanche sizes and durations), we hypothesize that the addition of avalanches with layer 5 and IT6 activity (activity patterns 2 and 3 above) results in more correlated activity and power-law exponents closer to -1.51 and -1.98 for size and duration, respectively. In conclusion, the increase in correlated activity among neuronal components parallels the emergence of clearly identifiable activity patterns across time and cortical layers and may generate rhythmic activity.
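The avalanche definition used here (adjacent occupied 1 ms bins bounded by empty bins) translates directly into code. The exponent estimator below is the continuous maximum-likelihood formula of Clauset and colleagues, included only as a rough stand-in for whatever fitting procedure was actually applied.

```python
import math

def avalanches(counts):
    """Extract avalanche sizes and durations from binned population spike
    counts: an avalanche is a maximal run of non-empty bins preceded and
    followed by at least one empty bin."""
    sizes, durations, s, d = [], [], 0, 0
    for c in list(counts) + [0]:             # sentinel closes the last run
        if c > 0:
            s += c
            d += 1
        elif d:
            sizes.append(s)
            durations.append(d)
            s = d = 0
    return sizes, durations

def powerlaw_exponent(samples, xmin=1.0):
    """Continuous MLE alpha = 1 + n / sum(ln(x / xmin)); only a rough
    approximation when applied to discrete avalanche sizes."""
    xs = [x for x in samples if x >= xmin]
    return 1.0 + len(xs) / sum(math.log(x / xmin) for x in xs)
```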

Fig. 1

Forty-nine raster plots, each showing 60 seconds of activity across all 10,073 neurons after a 0.57 nA current was applied for 100 ms to a volume of the cortical column 40 μm in diameter and spanning the full 1,350 μm depth from pia to white matter

P122 Adaptive activity-dependent myelination promotes synchronization in large-scale brain networks

John Griffiths1, Jeremie Lefebvre2, Rabiya Noori3, Daniel Park4, Sonya Bells5, Paul Frankland5, Donald Mabbott5

1Centre for Addiction and Mental Health, Krembil Centre for Neuroinformatics, Toronto, Canada; 2University of Ottawa, Department of Biology, Ottawa, Canada; 3Krembil Research Institute, Toronto, Canada; 4University of Toronto, Toronto, Canada; 5The Hospital for Sick Children, Neurosciences and Mental Health, Toronto, Canada

Correspondence: Jeremie Lefebvre (jeremie.lefebvre@uottawa.ca)

BMC Neuroscience 2020, 21(Suppl 1):P122

Communication and oscillatory synchrony between distributed neural populations are believed to play a key role in multiple cognitive and neural functions. These interactions are mediated by long-range myelinated axonal fibre bundles, collectively termed white matter. While traditionally considered static after development, white matter properties have been shown to change in an activity-dependent way through learning and behavior: a phenomenon known as white matter plasticity. In the central nervous system this plasticity stems from oligodendroglia, which form myelin sheaths that regulate the conduction of nerve impulses across the brain, hence critically impacting neural communication. Here we shift the focus from neural to glial contributions to brain synchronization and examine the impact of adaptive, activity-dependent changes in conduction velocity on the large-scale phase-synchronization of neural oscillators.

We used a network model built of reciprocally coupled Kuramoto phase oscillators whose connections are based on available primate large-scale white matter neuroanatomy data. Our computational and mathematical results show that such adaptive plasticity endows white matter networks with self-regulatory and self-organizing properties, in which conduction delay statistics are autonomously adjusted to ensure efficient neural communication. Specifically, our analysis shows that adaptive conduction velocities along axonal connections stabilize oscillatory neural activity across a wide range of connectivity gains and frequency bands. The resulting conduction delays become statistically similar, promoting phase-locking irrespective of distance. As a corollary, global phase-locked states are more resilient to diffuse decreases in connectivity, such as the damage caused by a neurological disease. Our work suggests that adaptive myelination may provide brain networks with a means of temporal self-organization, resilience and homeostasis.
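A toy version of this setup can be sketched with a small delayed Kuramoto network in which each link's conduction velocity relaxes toward the value that would give it the network-average delay. This homeostatic rule, the buffer sizing, and all parameters are illustrative stand-ins, not the adaptation rule or anatomy-based connectivity of the actual model.

```python
import math
import random

def kuramoto_adaptive(dists, omega=1.0, K=0.5, v0=1.0, eps=0.01,
                      dt=0.05, steps=5000, seed=0):
    """Delayed Kuramoto network with a toy homeostatic myelination rule:
    each link's conduction velocity relaxes toward the value giving it the
    network-average delay. Returns the final order parameter R in [0, 1]."""
    rng = random.Random(seed)
    N = len(dists)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(N)]
    v = [[v0] * N for _ in range(N)]
    maxd = max(max(row) for row in dists)
    maxlag = int(maxd / (0.1 * v0) / dt) + 2     # assumes v stays > 0.1 * v0
    hist = [list(theta) for _ in range(maxlag)]  # buffer of past phases
    for _ in range(steps):
        taus = [dists[i][j] / v[i][j]
                for i in range(N) for j in range(N) if i != j]
        tbar = sum(taus) / len(taus)
        new = []
        for i in range(N):
            coup = 0.0
            for j in range(N):
                if j == i:
                    continue
                lag = min(int(dists[i][j] / v[i][j] / dt), maxlag - 1)
                coup += math.sin(hist[-1 - lag][j] - theta[i])
            new.append(theta[i] + dt * (omega + (K / N) * coup))
        if tbar > 0:
            for i in range(N):
                for j in range(N):
                    if i != j:   # relax velocity toward the delay-equalising value
                        v[i][j] += eps * (dists[i][j] / tbar - v[i][j])
        theta = new
        hist.append(list(theta))
        hist.pop(0)
    c = sum(math.cos(t) for t in theta)
    s = sum(math.sin(t) for t in theta)
    return math.hypot(c, s) / N
```

With zero distances the model reduces to a plain Kuramoto network of identical oscillators, which phase-locks for any positive coupling.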

P123 Computational implementation in NEURON of inter-cellular calcium waves in syncytial tissue

Nilapratim Sengupta, Rohit Manchanda

Indian Institute of Technology Bombay, Department of Biosciences and Bioengineering, Mumbai, India

Correspondence: Nilapratim Sengupta (nilapratim@iitb.ac.in)

BMC Neuroscience 2020, 21(Suppl 1):P123

Calcium plays highly critical roles in various physiological processes such as muscle contraction, neurotransmission, cell growth and proliferation. Intracellular calcium handling mechanisms have been studied extensively. However, concepts of intracellular calcium dynamics must be complemented with knowledge of calcium spread in tissues where neighbouring cells are coupled to each other, forming a syncytium. Intercellular calcium waves (ICW) are ‘complex spatiotemporal events’ essentially comprising an elevated level of intracellular calcium that appears to spread from the initiating/stimulated cell to its coupled neighbours [1]. Gap-junction-mediated diffusion has been identified as a crucial mechanism for ICW in syncytial tissues [2].

The complex structure of syncytial tissue, coupled with the different signalling molecules and their varied mechanisms of involvement, makes it difficult to study ICW quantitatively using in vitro or in vivo experiments. Though mathematical models describing ICW propagation in two dimensions exist, there is no report of a biophysical model accounting for three-dimensional propagation of ICW in vivo in syncytial tissues. A key objective of our work was to realize, using the NEURON platform, a model for 3-D propagation. Several computational labs (primarily working on neural networks) use this platform to build models; however, to our knowledge, none of the existing cellular network models incorporates chemical coupling between cells. Successful implementation of our technique would produce a biophysically detailed model incorporating structural, electrical and chemical aspects that could be used to upgrade existing models once suitable parameter changes are made.

Gap junctions, in the NEURON platform, have usually been modelled as low-resistance electrical shunts between connected cells [3]. In a novel approach, we modelled the gap junction so as to enable it to transfer calcium between connected cells, based on the ion’s electrochemical gradient. We equipped the detrusor smooth muscle cell model with intracellular calcium handling mechanisms and then modified the gap junctional connection to incorporate intercellular calcium flow, in addition to non-specific current. We have successfully simulated calcium spread from the source cell to its adjoining neighbours and beyond. Within the network, the extent of calcium flow diminishes with distance from the source cell, owing to diffusion, buffering and extrusion of calcium from the cells (Fig. 1).
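The attenuation with distance can be caricatured with a linear chain of cells in which gap-junctional Ca2+ flux follows only the concentration difference between neighbours and a linear pump clears each cell. The electrical driving force and messenger regeneration of the full model are deliberately omitted, and every parameter below is an arbitrary illustrative choice.

```python
def simulate_icw(n_cells=7, g=0.4, pump=0.1, dt=0.01, steps=2000,
                 stim_cell=0, stim=5.0, stim_dur=100):
    """Toy intercellular Ca2+ wave on a chain of cells: Fickian
    gap-junction flux between neighbours, linear pump extrusion, and a
    transient Ca2+ injection into the source cell. Returns the peak
    concentration reached by each cell."""
    ca = [0.0] * n_cells
    peak = [0.0] * n_cells
    for step in range(steps):
        flux = [0.0] * n_cells
        for i in range(n_cells - 1):
            j = g * (ca[i] - ca[i + 1])      # flux down the Ca2+ gradient
            flux[i] -= j
            flux[i + 1] += j
        for i in range(n_cells):
            inj = stim if (i == stim_cell and step < stim_dur) else 0.0
            ca[i] += dt * (inj + flux[i] - pump * ca[i])
            peak[i] = max(peak[i], ca[i])
    return peak
```

The monotone decay of peak amplitude along the chain mirrors the qualitative behaviour described for the detrusor network.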

Fig. 1

Calcium transients triggered in connected neighbouring cells due to ICW propagation from stimulated source cell

Our model is in a preliminary stage and is yet to be tuned. Literature pertaining to ICW in detrusor is scant. Hence the model would need to be tuned in terms of overall cellular response, with active ion channels integrated. Subsequently, techniques mimicking messenger regeneration would need to be incorporated, besides passive diffusion.

References

  1. Leybaert L, Sanderson MJ. Intercellular Ca2+ waves: mechanisms and function. Physiological Reviews. 2012; 92(3): 1359-92.

  2. Sanderson MJ, Charles AC, Dirksen ER. Mechanical stimulation and intercellular communication increases intracellular Ca2+ in epithelial cells. Cell Regulation. 1990; 1(8): 585-96.

  3. Appukuttan S, Brain KL, Manchanda R. A computational model of urinary bladder smooth muscle syncytium. Journal of Computational Neuroscience. 2015; 38(1): 167-87.

P124 Morphologically-detailed reconstruction of cerebellar glial cells

Laura Keto, Tiina Manninen

Tampere University, Faculty of Medicine and Health Technology, Tampere, Finland

Correspondence: Laura Keto (laura.keto@tuni.fi)

BMC Neuroscience 2020, 21(Suppl 1):P124

The cerebellar circuitry has been modeled widely, with realistic and accurate models now existing for most of the cerebellar neurons [1]. The glial cell population of the cerebellum, however, has been largely neglected by computational modelers. No realistic whole-cell models have previously been implemented for any of the cerebellar glial cell types: oligodendrocytes, microglia, or astroglia. In this work, we were interested in reconstructing a detailed morphology for the best-known cerebellar astroglial cell type, Bergmann glia. Bergmann glia are radial astrocytes of the cerebellar cortex, with somata located in the Purkinje cell layer, 3-6 long processes extending through the molecular layer, and endfeet. The processes give rise to smaller appendages characterized by microdomains that enwrap neuronal synapses [2]. The reciprocal communication between Bergmann glia and neighboring neurons is vital for the development and plasticity of the cerebellum [3].

Currently, no reconstructions of cerebellar glial cells are available in public databases. The reconstruction of Bergmann glia required both an astroglial stem tree and a more detailed morphology for reconstructing the astroglial nanoscopic architecture. The stem tree was built with the NEURON CellBuilder tool [4] using values found in the literature [2,5]. For the nanoscopic architecture, a 3D reconstruction of a Bergmann glial appendage was recreated from a video file [2] with AgiSoft Metashape and Blender. As an exact reconstruction of the nanoscopic architecture would be computationally unfeasible, a novel computational tool, ASTRO [6], was used to derive statistical properties from the reconstructed appendage. The final morphology was assembled with ASTRO and verified functionally by simulating microscopic calcium dynamics with the tool.

Acknowledgements: We are very grateful to Prof. Helmut Kettenmann for providing us the video file of Bergmann glia appendage. The work was supported by Academy of Finland (Nos. 326494, 326495).

References

  1. D’Angelo E, et al. Modeling the cerebellar microcircuit: New strategies for a long-standing issue. Frontiers in Cellular Neuroscience. 2016; 10: 176.

  2. Grosche J, et al. Microdomains for neuron-glia interaction: Parallel fiber signaling to Bergmann glial cells. Nature Neuroscience. 1999; 2(2): 139–143.

  3. Bellamy TC. Interactions between Purkinje neurones and Bergmann glia. The Cerebellum. 2006; 5(2): 116–126.

  4. Carnevale NT, Hines ML. The NEURON book. Cambridge University Press; 2006.

  5. Lippman JJ, Lordkipanidze T, Buell ME, Yoon SO, Dunaevsky A. Morphogenesis and regulation of Bergmann glial processes during Purkinje cell dendritic spine ensheathment and synaptogenesis. Glia. 2008; 56(13): 1463–1477.

  6. Savtchenko LP, et al. Disentangling astroglial physiology with a realistic cell model in silico. Nature Communications. 2018; 9: 3554.

P125 A predictive coding model of transitive inference

Moritz Moeller1, Rafal Bogacz2, Sanjay Manohar1

1University of Oxford, Nuffield Department of Clinical Neurosciences, Oxford, United Kingdom; 2University of Oxford, MRC Brain Network Dynamics Unit, Oxford, United Kingdom

Correspondence: Moritz Moeller (moritz.moeller@ndcn.ox.ac.uk)

BMC Neuroscience 2020, 21(Suppl 1):P125

Transitive inference – deducing that “A is better than C” from the premises “A is better than B” and “B is better than C” – is a basic form of deductive reasoning; both humans and animals are capable of it. However, the mechanism that enables transitive inference is not understood. Partly, this is due to the absence of a concrete, falsifiable formulation of the so-called cognitive explanation of transitive inference (which suggests that subjects combine the facts they observe into a mental model, which they then use for reasoning). In this work, we use the predictive coding method to derive a precise, mathematical implementation of the cognitive explanation of transitive inference (Fig. 1A shows a schematic representation of the model we use). We test our model by simulating a set of typical transitive inference experiments and show that it reproduces several phenomena observed in animal experiments. For example, our model reproduces the gradual acquisition of premise pairs (A > B, B > C) and the simultaneously emerging capability for transitive inference (A>C) (Fig. 1B). We expect this work to lead to novel testable predictions that will inspire future experiments and help to uncover the mechanism behind transitive inference. Further, our work adds support to predictive coding as a universal organising principle of brain function.
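A drastically simplified stand-in for such a model (a delta rule on scalar stimulus values, not the predictive-coding network's actual equations) already shows how transitive orderings can emerge from premise pairs alone.

```python
def train_values(premises, alpha=0.1, epochs=200):
    """Learn scalar values from pairwise premises (winner, loser) with a
    delta rule that drives each trained value difference toward 1; the
    transitive ordering then falls out of the learned values without any
    training on the untrained pair."""
    v = {}
    for w, l in premises:
        v.setdefault(w, 0.0)
        v.setdefault(l, 0.0)
    for _ in range(epochs):
        for w, l in premises:
            err = 1.0 - (v[w] - v[l])        # prediction error on the premise
            v[w] += alpha * err
            v[l] -= alpha * err
    return v
```

Training only on A > B and B > C leaves A and C never directly compared, yet the learned values rank A above C, which is the behavioural signature of transitive inference.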

Fig. 1

A Schematic representation of the predictive coding network used to model transitive inference. B Model predictions of pairwise value differences in a hierarchy stimuli A > B > C. As the premises A > B and B > C are learned, the model’s evaluations become consistent with the hierarchy

P126 Topological Bayesian signal processing with application to EEG

Christopher Oballe1, Alan Czerne1, David Boothe2, Scott Kerick2, Piotr Franaszczuk2, Vasileios Maroulas1

1University of Tennessee, Department of Mathematics, Knoxville, Tennessee, United States of America; 2U.S. Army Research Laboratory, Human Research Engineering Directorate, Aberdeen Proving Ground, Maryland, United States of America

Correspondence: Piotr Franaszczuk (pfranasz@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P126

Electroencephalography is a neuroimaging technique that monitors electrical activity in the brain. Electrodes are placed on the scalp and local changes in voltage are measured over time to produce a collection of time series known as electroencephalograms (EEGs). Traditional signal processing metrics, such as power spectral densities (PSDs), are generally used to analyze EEG, since the frequency content of EEG is associated with different brain states. Conventionally, PSD estimates are obtained via discrete Fourier transforms. While this method effectively detects low-frequency components because of their high power, high-frequency activity may go unnoticed because of its relatively weaker power. We employ a topological Bayesian approach that successfully captures even these low-power, high-frequency components of EEG.

Topological data analysis encompasses a broad set of techniques that investigate the shape of data. One of the predominant tools in topological data analysis is persistent homology, which creates topological descriptors called persistence diagrams from datasets. In particular, persistent homology offers a novel technique for time series analysis. To motivate our use of persistent homology to study the frequency content of signals, we establish explicit links between features of persistence diagrams, such as cardinality and the spatial distribution of points, and those of the Fourier series of deterministic signals, specifically the locations of peaks and their relative powers. The topological Bayesian approach allows quantification of these cardinality and spatial distributions by modelling persistence diagrams as marked Poisson point processes.
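For a 1-D signal the persistence diagram in question can be computed with 0-dimensional sublevel-set persistence: components are born at local minima and die, under the elder rule, at the level of the separating peak. The union-find sketch below is a generic textbook construction, not the authors' implementation.

```python
def persistence_0d(signal):
    """0-dimensional sublevel-set persistence of a 1-D signal: a component
    is born at each local minimum and dies (elder rule) at the height where
    it merges into an older component; the global minimum is paired with
    +inf. Zero-persistence pairs are discarded."""
    n = len(signal)
    parent = [-1] * n                        # -1 = point not yet added

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path compression
            i = parent[i]
        return i

    birth = [0.0] * n
    pairs = []
    for i in sorted(range(n), key=lambda k: signal[k]):
        parent[i] = i
        birth[i] = signal[i]
        for nb in (i - 1, i + 1):
            if 0 <= nb < n and parent[nb] != -1:
                ri, rn = find(i), find(nb)
                if ri == rn:
                    continue
                old, young = (ri, rn) if birth[ri] <= birth[rn] else (rn, ri)
                if signal[i] > birth[young]:          # skip trivial pairs
                    pairs.append((birth[young], signal[i]))
                parent[young] = old
    pairs.append((min(signal), float("inf")))
    return pairs
```

Each finite (birth, death) pair corresponds to a valley/peak pair in the signal, which is exactly the feature set the abstract links to Fourier peaks.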

We test our Bayesian topological method by classifying synthetic EEG. We employ three common classifiers: linear regression, and support vector machines with linear and radial kernels. We simulate synthetic EEG with an autoregressive (AR) model, which can be recast as linearly filtered white noise, enabling straightforward computation of PSDs. The AR model allows us to control the location and width of peaks in the PSD. With this model in hand, we create five classes of signals with peaks in their PSDs at zero to simulate the approximate 1/f behavior of EEG PSDs, four of which also have oscillatory components at 6 Hz (theta), 10 Hz (alpha), 14 Hz (low beta), and 21 Hz (high beta); the fifth (null) class lacks any such component. We repeat this process for two peak widths, narrow (4 Hz) and wide (32 Hz). We then extract features using periodograms, persistence diagrams, and our Bayesian topological method, and independently use these features for classification in the wide and narrow width cases. Preliminarily, while both the Bayesian topological method and the periodogram features achieve near-perfect accuracy in the narrow-peak case, the Bayesian topological method outperforms the periodogram features over all tested classifiers in the wide-peak case.
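A minimal version of such an AR simulator places a complex pole pair at the desired oscillation frequency, with the pole radius controlling peak width. The function below (illustrative parameters, plus an optional deterministic drive) is a simplified stand-in for the abstract's model, not its actual parameterisation.

```python
import math
import random

def ar2_oscillation(f0, fs=256.0, r=0.95, n=1024, seed=0, drive=None):
    """White-noise-driven AR(2) process with complex poles at radius r and
    angle 2*pi*f0/fs, so its PSD peaks near f0 Hz; r closer to 1 gives a
    narrower peak. `drive` may supply a deterministic input instead of
    noise (useful for checking the impulse response)."""
    rng = random.Random(seed)
    w = 2 * math.pi * f0 / fs
    a1, a2 = 2 * r * math.cos(w), -r * r
    x = [0.0, 0.0]
    for t in range(n):
        e = drive[t] if drive is not None else rng.gauss(0, 1)
        x.append(a1 * x[-1] + a2 * x[-2] + e)
    return x[2:]
```

Summing several such components (one per oscillatory band, plus a pole near zero for the 1/f trend) reproduces the kind of multi-class synthetic EEG described above.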

P127 A quantification of cross-frequency coupling via topological methods

Alan Cherne1, Christopher Oballe1, David Boothe2, Melvin Felton2, Piotr Franaszczuk2, Vasileios Maroulas1

1University of Tennessee, Department of Mathematics, Knoxville, United States of America; 2U.S. Army Combat Capabilities Development Command, United States of America

Correspondence: David Boothe (david.l.boothe7.civ@mail.mil)

BMC Neuroscience 2020, 21(Suppl 1):P127

A key feature of electroencephalograms (EEG) is the existence of distinct oscillatory components: theta (4-7 Hz), alpha (8-13 Hz), beta (14-30 Hz), and gamma (40-100 Hz). Cross-frequency coupling has been observed between these frequency bands in both the local field potential (LFP) and the EEG. While the association between activity in distinct oscillatory frequencies and important brain functions is well established, the functional role of cross-frequency coupling is poorly characterized; it has been hypothesized to underlie cortical functions such as working memory, learning, and computation [1,2].

The most common form of cross-frequency coupling observed in brain activity recordings is the modulation of the amplitude of a higher frequency oscillation by the phase of a lower frequency oscillation, a phenomenon known as phase-amplitude coupling (PAC). We present a method for detecting PAC in signals that avoids some pitfalls of existing methods by using techniques from the field of topological data analysis (TDA). TDA commonly summarizes data in an object called a persistence diagram; for a time series, the persistence diagram compactly represents all the peaks and valleys that occur in the signal. We inspect persistence diagrams for the presence of phase-amplitude coupling using the intuition that PAC imparts asymmetry to the upper and lower segments of the diagram. This representation of the data has the advantage that it requires none of the Fourier analysis parameters, bin sizes, and phase estimations that are necessary in current methods [3].
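The peaks-and-valleys persistence diagram of a time series can be computed with a short union-find pass over the samples sorted by value. The sketch below is a generic implementation of 0-dimensional sublevel-set persistence, not the authors' code:

```python
def persistence_pairs(x):
    """0-dimensional sublevel-set persistence of a 1-D signal: each local
    minimum (birth) is paired with the value at which its component merges
    into an older one (death); the global minimum never dies."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    comp = [None] * n                      # union-find parent, None = inactive

    def find(i):
        while comp[i] != i:
            comp[i] = comp[comp[i]]        # path compression
            i = comp[i]
        return i

    pairs = []
    for i in order:                        # sweep samples from low to high
        left = find(i - 1) if i > 0 and comp[i - 1] is not None else None
        right = find(i + 1) if i < n - 1 and comp[i + 1] is not None else None
        comp[i] = i
        if left is not None and right is not None:
            # two components merge: the younger minimum dies at x[i]
            old, young = (left, right) if x[left] <= x[right] else (right, left)
            pairs.append((x[young], x[i]))
            comp[young] = old
            comp[i] = old
        elif left is not None:
            comp[i] = left
        elif right is not None:
            comp[i] = right
    pairs.append((x[order[0]], float("inf")))
    return pairs

print(persistence_pairs([0, 2, 1, 3]))     # → [(1, 2), (0, inf)]
```

In the example, the minor valley at value 1 is born at 1 and dies when it merges over the peak at 2, which is the kind of peak/valley pairing the asymmetry test operates on.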

We test the performance of our metric on two kinds of synthetic signals: the first is a phenomenological model with varying levels of phase-amplitude coupling [4], quantified by the Kullback-Leibler divergence from signals with no PAC (the uniform case); the second is simulated single-cell neuronal data based on a layer 5 pyramidal cell [5,6]. Finally, we benchmark our method against previously explored methods [4] on EEG data recorded from human subjects.
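The Kullback-Leibler-based PAC level used for the phenomenological model is commonly quantified with the modulation index of Tort et al. [4]. A minimal sketch (the bin count and synthetic signal values are illustrative assumptions, not the study's):

```python
import numpy as np

def modulation_index(phase, amp, n_bins=18):
    """Tort-style modulation index: KL divergence of the phase-binned mean
    amplitude distribution from uniform, normalised by log(n_bins)."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    which = np.clip(np.digitize(phase, edges) - 1, 0, n_bins - 1)
    mean_amp = np.array([amp[which == b].mean() for b in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    kl = np.sum(p * np.log(p * n_bins))    # KL(p || uniform), all bins filled
    return kl / np.log(n_bins)

# synthetic example: a 6 Hz phase modulating a fast amplitude envelope
t = np.arange(0, 10, 1e-3)
phase = (2 * np.pi * 6 * t) % (2 * np.pi) - np.pi
amp_pac = 1 + 0.8 * np.cos(phase)          # amplitude locked to phase
amp_flat = np.ones_like(phase)             # no coupling
print(modulation_index(phase, amp_pac) > modulation_index(phase, amp_flat))  # → True
```

The index is exactly zero for the uncoupled signal and grows with the depth of the amplitude modulation, which is what the topological metric is benchmarked against.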

References

  1. VanRullen R, Koch C. Is perception discrete or continuous? Trends in Cognitive Sciences. 2003; 7(5): 207-13.

  2. Canolty RT, Knight RT. The functional role of cross-frequency coupling. Trends in Cognitive Sciences. 2010; 14(11): 507-515.

  3. Cohen MX. Assessing transient cross-frequency coupling in EEG data. Journal of Neuroscience Methods. 2008; 168: 494-499.

  4. Tort ABL, Komorowski R, Eichenbaum H, Kopell N. Measuring phase-amplitude coupling between neural oscillations of different frequencies. Journal of Neurophysiology. 2010; 104: 1195-210.

  5. Felton Jr MA, Yu AB, Boothe DL, Oie KS, Franaszczuk PJ. Resonance analysis as a tool for characterizing functional division of layer 5 pyramidal neurons. Frontiers in Computational Neuroscience. 2018; 12: 29.

  6. Traub RD, et al. Single-column thalamocortical network model exhibiting gamma oscillations, sleep spindles, and epileptogenic bursts. Journal of Neurophysiology. 2005; 93(4): 2194-232.

P128 An integrate-and-fire model of narrow band modulation in mouse visual cortex

Nicolò Meneghetti1,2, Alberto Mazzoni1,2

1The Biorobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy; 2Department of Excellence for Robotics and AI, Scuola Superiore Sant’Anna, Pisa, Italy

Correspondence: Nicolò Meneghetti (nicolo.meneghetti@santannapisa.it)

BMC Neuroscience 2020, 21(Suppl 1):P128

Gamma band neuronal oscillations are ubiquitously involved in sensory processing in the central nervous system. They emerge from the coordinated interaction of excitation and inhibition and are a biological marker of local active network computations [1]. Visual features such as contrast and orientation are known to modulate broad band gamma activity in the primary visual cortex (V1) of primates [2]. In mouse V1, however, a narrow band within the gamma range was found to display specific functional sensitivity to visual features [3].

Here we present a network of recurrent excitatory-inhibitory spiking neurons reproducing the gamma narrow band dynamics observed in mouse V1 in [3], building on previous work of our group [4,5]. By combining experimental data analysis and simulations, we show that a proper design of the simulated thalamic input leads the network to exhibit both narrow and broad band gamma activity.

We reproduced the spectral and temporal modulations of V1 local field potentials of awake mice presented with gratings of different contrast levels by approximating the thalamic input rate with two linear functions defined over complementary contrast ranges. We propose a theoretical framework in which the external thalamic drive induces the emergence of broad band gamma activity by triggering cortical resonances, and of narrow band gamma activity by entraining the cortex to an oscillatory drive. In particular, our results support the hypothesis of a subcortical origin of the narrow gamma band [3].
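A minimal sketch of the two-piece linear contrast-to-rate mapping described above (the breakpoint and slope values are illustrative assumptions, not the fitted values of the study):

```python
import numpy as np

def thalamic_rate(contrast, c_break=0.3, r0=5.0, s_low=40.0, s_high=10.0):
    """Thalamic input rate (spikes/s) as two linear functions of contrast
    over complementary ranges, continuous at the breakpoint c_break.
    All parameter values here are illustrative, not fitted."""
    contrast = np.asarray(contrast, dtype=float)
    low = r0 + s_low * contrast                               # below break
    high = r0 + s_low * c_break + s_high * (contrast - c_break)  # above break
    return np.where(contrast <= c_break, low, high)

for c in (0.0, 0.3, 1.0):
    print(c, float(thalamic_rate(c)))
```

The steeper low-contrast segment captures the stronger rate sensitivity at low contrasts; above the breakpoint the drive grows more slowly.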

Our network provides a simple and effective model of contrast-induced gamma activity in rodent V1. The model could easily be extended to reproduce the modulation of V1 gamma activity induced by other visual stimulus features. Moreover, it could help investigate the network dynamics responsible for pathological dysfunctions of visual information processing in mice.

References

  1. Buzsáki G, Wang XJ. Mechanisms of gamma oscillations. Annual Review of Neuroscience. 2012; 35: 203-25.

  2. Henrie JA, Shapley R. LFP power spectra in V1 cortex: the graded effect of stimulus contrast. Journal of Neurophysiology. 2005; 94(1): 479-90.

  3. Saleem AB, et al. Subcortical source and modulation of the narrowband gamma oscillation in mouse visual cortex. Neuron. 2017; 93(2): 315-22.

  4. Mazzoni A, Brunel N, Cavallari S, Logothetis NK, Panzeri S. Cortical dynamics during naturalistic sensory stimulations: experiments and models. Journal of Physiology-Paris. 2011; 105(1-3): 2-15.

  5. Mazzoni A, Linden H, Cuntz H, Lansner A, Panzeri S, Einevoll GT. Computing the local field potential (LFP) from integrate-and-fire network models. PLoS Computational Biology. 2015; 11(12): e1004584.

P129 Correlated inputs to the striatum generate beta over-synchronization in an in silico cortico-basal ganglia network

Elena Manferlotti1, Matteo Vissani2, Arvind Kumar3, Alberto Mazzoni2

1Scuola Superiore Sant’Anna, The Biorobotics Institute and Department of Excellence for Robotics and AI, Pisa, Italy; 2Scuola Superiore Sant’Anna Pisa, The Biorobotic institute, Pisa, Italy; 3KTH Royal Institute of Technology, Computational Brain Science, Stockholm, Sweden

Correspondence: Elena Manferlotti (elena.manferlotti@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P129

Parkinson’s Disease (PD) is known to be associated with over-synchronized oscillations in the beta frequency range (13-35 Hz) in the motor cortex and basal ganglia (BG) [1]. Although the mechanisms underlying the emergence of these oscillations are poorly understood, several excitatory-inhibitory loops that might initiate or generate them have been identified in cortex-BG networks.

Recent experimental data suggest that striatal spiny projection neurons (SPNs) are phase locked to beta oscillation cycles [2]. Indeed, a transient change in SPN firing rate is sufficient to unleash oscillations in the mutually connected globus pallidus externus (GPe)-subthalamic nucleus (STN) network [3].

Here, we investigate the effect of temporal synchrony of SPN activity on beta oscillations by simulating a biologically plausible BG model with spiking neurons [4]. The likely source of correlations in the SPNs is thalamo-cortical input, since striatal connectivity is too sparse. Therefore, we injected correlated inputs into the SPNs. Our model showed the emergence of aberrant beta band synchronization as the network switched from uncorrelated to correlated input. Crucially, the inputs had a fixed firing rate; that is, the over-synchronization emerged solely from input synchrony. Furthermore, increased input correlation resulted in an enhanced globus pallidus internus (GPi) firing rate, as observed experimentally [5]. Next, we investigated the possible consequences of these results for Deep Brain Stimulation (DBS) by simulating high frequency injections into the STN. Our preliminary results showed that even a short window of stimulation was enough to reduce beta oscillations in the firing rates of the STN, GPe and GPi nuclei.
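Correlated spike-train inputs with a fixed firing rate, as described above, can be generated with a multiple-interaction-process construction; the sketch below (not the authors' code; rate, correlation, and duration values are illustrative) thins a shared mother Poisson train so that each train's rate is independent of the correlation parameter c:

```python
import numpy as np

def correlated_poisson(n_trains, rate, c, duration, dt=1e-3, seed=0):
    """Multiple interaction process: a mother Poisson train of rate rate/c is
    thinned independently per train with copy probability c. Pairwise spike
    count correlation grows with c, while each train's rate stays `rate`."""
    rng = np.random.default_rng(seed)
    n_bins = int(round(duration / dt))
    mother = rng.random(n_bins) < (rate / c) * dt       # shared source train
    return rng.random((n_trains, n_bins)) < mother * c  # independent thinning

trains = correlated_poisson(100, rate=5.0, c=0.2, duration=50.0)
print(trains.mean() / 1e-3)        # mean rate ≈ 5 spk/s, independent of c
```

Sweeping c while the per-train rate stays constant isolates the effect of input synchrony, matching the manipulation in the abstract.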

Our study provides novel observations about the origin and propagation of PD-related beta oscillations in the BG and their reduction by DBS. It paves the way toward in silico testing of DBS settings, which could be used to determine optimal stimulation parameters offline rather than during surgical implantation.

References

  1. Oswal A, Brown P, Litvak V. Synchronized neural oscillations and the pathophysiology of Parkinson’s disease. Current Opinion in Neurology. 2013; 26: 662-70.

  2. Sharott A, et al. A population of indirect pathway striatal projection neurons is selectively entrained to parkinsonian beta oscillations. Journal of Neuroscience. 2017; 37: 9977-98.

  3. Mirzaei A, et al. Sensorimotor processing in the basal ganglia leads to transient beta oscillations during behavior. Journal of Neuroscience. 2017; 37(46): 11220-32.

  4. Lindahl M, Hellgren Kotaleski J. Untangling basal ganglia network dynamics and function: role of dopamine depletion and inhibition investigated in a spiking network model. eNeuro. 2016; 3.

  5. Magnin M, Morel A, Jeanmonod D. Single-unit analysis of the pallidum, thalamus and subthalamic nucleus in parkinsonian patients. Neuroscience. 2000; 96: 549-64.

P130 Unifying information theory and machine learning in a model of cochlear implant electrode discrimination

Xiao Gao1, David Grayden1, Mark McDonnell2

1University of Melbourne, Department of Biomedical Engineering, Melbourne, Australia; 2University of South Australia, School of Information Technology and Mathematical Sciences, Adelaide, Australia

Correspondence: Xiao Gao (xiao.gao@unimelb.edu.au)

BMC Neuroscience 2020, 21(Suppl 1):P130

Despite the success of cochlear implants (CIs) over more than three decades, wide inter-subject variability in speech perception is reported [1], and the key factors that cause this variability between users are unclear. We previously developed an information theoretic modelling framework that enables estimation of the optimal number of electrodes and quantification of electrode discrimination ability [2,3]. However, the optimal number of electrodes was estimated based only on statistical correlations between channel outputs and inputs; the model neither quantitatively modelled psychophysical measurements nor addressed inter-subject variability.

Here, we unified information theoretic and machine learning techniques to investigate the key factors that may limit the performance of CIs. The framework uses a neural network classifier to predict which electrode was stimulated given a simulated activation pattern of the auditory nerve; mutual information is then estimated between the actual stimulated electrode and the predicted one.
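The mutual information between stimulated and predicted electrode can be estimated directly from the classifier's confusion matrix; a minimal generic sketch (the electrode count and trial numbers are illustrative, not the study's):

```python
import numpy as np

def mutual_information(confusion):
    """Mutual information (bits) between actual and predicted electrode,
    estimated from a confusion matrix of classification counts."""
    p = confusion / confusion.sum()               # joint distribution
    px = p.sum(axis=1, keepdims=True)             # actual-electrode marginal
    py = p.sum(axis=0, keepdims=True)             # predicted marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (px * py))        # zero cells give NaN
    return np.nansum(terms)                       # NaN terms contribute 0

perfect = np.eye(4) * 25          # 4 electrodes, always classified correctly
chance = np.full((4, 4), 25.0)    # predictions independent of the stimulus
print(mutual_information(perfect))   # → 2.0  (log2 of 4 electrodes)
print(mutual_information(chance))    # → 0.0
```

Perfect discrimination saturates at log2 of the number of electrodes, so the measure directly bounds how many electrodes are perceptually distinguishable.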

Using the framework, electrode discrimination was quantified over a range of parameter choices, as shown in Figure 1. The columns, from left to right, show how model performance is affected by the distance between electrodes and auditory nerve fibres, r; the number of surviving fibres, N; the maximum current level (modelled as the percentage of surviving fibres that generate action potentials for a given stimulated electrode); and the attenuation of electrode current, A. The parameters were chosen to reflect the key factors believed to limit the performance of CIs. The model is sensitive to parameter choices: smaller r, larger N, and higher current attenuation lead to higher mutual information and improved classification.

Fig. 1

Model performance with a range of parameter choices

This approach provides a flexible framework that may be used to investigate the key factors that limit the performance of cochlear implants. We aim to investigate its application to personalised configurations of CIs.

Acknowledgments: This work is supported by a McKenzie Fellowship, The University of Melbourne.

References

  1. Holden LK, et al. Factors affecting open-set word recognition in adults with cochlear implants. Ear and Hearing. 2013; 34(3): 342.

  2. Gao X, Grayden DB, McDonnell MD. Stochastic information transfer from cochlear implant electrodes to auditory nerve fibers. Physical Review E. 2014; 90(2): 022722.

  3. Gao X, Grayden DB, McDonnell MD. Modeling electrode place discrimination in cochlear implant stimulation. IEEE Transactions on Biomedical Engineering. 2016; 64(9): 2219-29.

P131 Unsupervised metadata tagging of computational neuroscience literature, towards question answering

Evan Cudone1, Robert McDougal2

1Yale University, Computational Biology and Bioinformatics, New Haven, Connecticut, United States of America; 2Yale University, Department of Neuroscience, New Haven, Connecticut, United States of America

Correspondence: Evan Cudone (evan.cudone@yale.edu)

BMC Neuroscience 2020, 21(Suppl 1):P131

Curation and knowledge dissemination in computational neuroscience require many unique considerations, as the field draws language, methods, and ideas from diverse areas including biology, chemistry, physics, mathematics, medicine, and computer science. To effectively facilitate curation and knowledge dissemination for the computational neuroscience community, we must first develop a robust representation of its existing literature. Using unsupervised topic modeling approaches, we developed a metadata tagging schema for computational neuroscience literature from ModelDB (a repository of computational neuroscience models) and compared it to that of a larger neuroscience corpus. This analysis shows key differences in the types of discoveries and knowledge addressed in neuroscience and its computational subdiscipline, and gives insight into how an automated question answering system might differ between the two.

P132 Preventing retinal ganglion cell axon bundle activation with oriented rectangular electrodes

Wei Tong1, Michael R Ibbotson2, Hamish Meffin3

1National Vision Research Institute, Melbourne, Australia; 2Australian College of Optometry, The National Vision Research Institute, Melbourne, Australia; 3University of Melbourne, Biomedical Engineering, Melbourne, Australia

Correspondence: Wei Tong (wei.tong@unimelb.edu.au)

BMC Neuroscience 2020, 21(Suppl 1):P132

Retinal prostheses can restore visual sensations in people who have lost their photoreceptors by electrically stimulating surviving retinal ganglion cells (RGCs). Three main types of retinal prostheses are currently under development, classified by their implantation location: epi-retinal, sub-retinal and suprachoroidal [1]. Clinical studies of all three types of devices indicate that, although a sense of vision can be restored, the visual acuity obtained is limited, and functional vision, such as navigation and facial recognition, remains challenging. One major difficulty is the low spatial resolution obtained from electrical stimulation: the large spread of activation amongst RGCs leads to blurred or distorted visual percepts. In particular, with epi-retinal implants, experiments have revealed that the leading cause of widespread activation is the unintended activation of passing RGC axons, which leads to elongated phosphenes in patients [2].

This work proposes using rectangular electrodes oriented parallel to the axon bundles to prevent the activation of passing axons. Here, we first used simulation to investigate how neural tissue orientation and stimulating electrode configuration interact to shape RGC activation patterns. A four-layer computational model of epiretinal extracellular stimulation that captures the effect of neurite orientation in anisotropic tissue was applied, as previously described [3], using a volume conductor model known as the cellular composite model. As shown in Figure 1a, our model shows that stimulation with a rectangular electrode aligned with the nerve fiber layer (i.e., with the passing axon bundles) can achieve selective activation of axon initial segments rather than passing fibers.

Fig. 1

Simulated membrane potentials (a) and experimental calcium imaging results (b) both indicate that a rectangular electrode oriented parallel to the axon bundles can lead to localised RGC activation by avoiding the unintended activation of passing axon bundles

The simulation results were then confirmed experimentally. Data were acquired from adult Long Evans rats by recording the responses of RGCs in whole-mount retina preparations using calcium imaging. Electrical stimulation was delivered through a diamond-coated carbon fiber electrode with a length of 200 µm and a diameter of 10 µm, placed either parallel or perpendicular to the RGC axon bundles. Biphasic stimuli with pulse durations of 33-500 µs were tested. Our experimental observations (Fig. 1b) are consistent with the simulations: rectangular electrodes placed parallel to axon bundles significantly reduce the activation of RGC axon bundles. With biphasic stimulation as short as 33 µs, the activated RGCs were mostly confined to the region below or very close to the electrode, as observed using confocal microscopy.

To conclude, this work provides a stimulation strategy for reducing the spread of RGC activation in epi-retinal prostheses. Using ultrashort pulses together with rectangular electrodes parallel to the RGC axon bundles should significantly improve the performance of epi-retinal prostheses, promising to restore a higher quality of vision to the blind.

References

  1. Weiland JD, Walston ST, Humayun MS. Electrical stimulation of the retina to produce artificial vision. Annual Review of Vision Science. 2016; 2: 273-94.

  2. Nanduri D, et al. Frequency and amplitude modulation have different effects on the percepts elicited by retinal stimulation. Investigative Ophthalmology & Visual Science. 2012; 53(1): 205-14.

  3. Esler TB, et al. Minimizing activation of overlying axons with epiretinal stimulation: the role of fiber orientation and electrode configuration. PLoS One. 2018; 13(3): e0193598.

P133 Recurrent neural networks trained in multisensory integration tasks reveal a diversity of selectivity and structural properties

Amparo Gilhuis1, Shirin Dora2, Cyriel Pennartz1, Jorge Mejias1

1University of Amsterdam, Swammerdam Institute for Life Sciences, Amsterdam, Netherlands; 2Ulster University, Intelligent Systems Research Centre, Londonderry, United Kingdom

Correspondence: Jorge Mejias (jorge.f.mejias@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P133

The brain continuously processes sensory information from multiple modalities, giving rise to internal representations of the outside world. Whether and how information from multiple modalities is integrated has been investigated extensively over the past years, leading to more insight into multisensory integration (MSI) and its underlying mechanisms [1]. However, the different experimental paradigms used to investigate MSI involve different cognitive resources and situational demands. In this study, we investigated how different experimental paradigms of MSI are reflected in behavioral output and in the corresponding neural activity patterns. We did so by designing a recurrent neural network (RNN) with the biologically plausible feature of distinguishing between excitatory and inhibitory units [2]. For each of the three multisensory processing tasks considered [3,4], an RNN was optimized to perform the task with performance similar to that found in animals. Network models trained on different experimental paradigms showed significantly distinct selectivity and connectivity patterns. Selectivity for both modality and choice was found in network models trained on the paradigm involving higher cognitive resources, whereas models trained on paradigms involving more bottom-up processes exhibited mostly choice selectivity. Increasing the level of network noise in models that initially showed no modality selectivity led to an increase in modality selectivity. We propose that a wider range of selectivity arises when a task is more demanding, either due to higher network noise (which makes the task harder for the animal) or a more difficult experimental paradigm. This wider range of selectivity is thought to improve the flexibility of the network model, which could be a necessity for achieving good performance, and the resulting neural heterogeneity could be exploited for more general information processing strategies [5,6].
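The excitatory/inhibitory split of RNN units can be enforced by fixing the sign of each presynaptic column of the recurrent weight matrix, in the spirit of Song et al. [2]; a minimal sketch (unit counts and weight magnitudes are illustrative assumptions):

```python
import numpy as np

def dale_weights(n_exc, n_inh, seed=0):
    """Recurrent weight matrix respecting Dale's principle: every column
    (presynaptic unit) is all non-negative (excitatory) or all non-positive
    (inhibitory). Magnitudes here are arbitrary illustrative values."""
    rng = np.random.default_rng(seed)
    n = n_exc + n_inh
    magnitude = np.abs(rng.standard_normal((n, n))) / np.sqrt(n)
    sign = np.concatenate([np.ones(n_exc), -np.ones(n_inh)])
    return magnitude * sign            # sign broadcasts over columns

W = dale_weights(80, 20)               # 4:1 E/I ratio, a common choice
print((W[:, :80] >= 0).all(), (W[:, 80:] <= 0).all())  # → True True
```

During training, the optimizer updates only the magnitudes while the fixed signs keep each unit purely excitatory or purely inhibitory.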

Acknowledgements: This work was partially supported by the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 785907.

References

  1. Chandrasekaran C. Computational principles and models of multisensory integration. Current Opinion in Neurobiology. 2017; 43: 25-34.

  2. Song HF, Yang GR, Wang XJ. Training excitatory-inhibitory recurrent neural networks for cognitive tasks: a simple and flexible framework. PLoS Computational Biology. 2016; 12(2): e1004792.

  3. Raposo D, Kaufman MT, Churchland AK. A category-free neural population supports evolving demands during decision-making. Nature Neuroscience. 2014; 17: 1784.

  4. Meijer GT, Pie JL, Dolman TL, Pennartz C, Lansink CS. Audiovisual integration enhances stimulus detection performance in mice. Frontiers in Behavioral Neuroscience. 2018; 12: 231.

  5. Mejias JF, Longtin A. Optimal heterogeneity for coding in spiking neural networks. Physical Review Letters. 2012; 108(22): 228102.

  6. Mejias JF, Longtin A. Differential effects of excitatory and inhibitory heterogeneity on the gain and asynchronous state of sparse cortical networks. Frontiers in Computational Neuroscience. 2014; 8: 107.

P134 A fluid hierarchy in multisensory integration properties of large-scale cortical networks

Ronaldo Nunes1, Marcelo Reyes1, Raphael de Camargo1, Jorge Mejias2

1Universidade Federal do ABC, Center for Mathematics, Computing and Cognition, Santo Andre, Brazil; 2University of Amsterdam, Swammerdam Institute for Life Sciences, Amsterdam, Netherlands

Correspondence: Jorge Mejias (jorge.f.mejias@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P134

A fundamental ingredient of perception is the integration of information from different sensory modalities. This process, known as multisensory integration (MSI), has been studied extensively using animal and computational models [1]. It is not yet clear, however, how different brain areas contribute to MSI, and identifying the relevant areas remains challenging. Part of the reason is that simultaneous electrophysiological recording from different brain areas has become feasible only recently [2], and the intensity, noise profile and response delays differ across sensory signals [1]. Furthermore, computational models have traditionally focused on only a few areas, a limitation imposed by the lack of reliable anatomical data on brain networks. We present here a theoretical and computational study of the mechanisms underlying MSI in the mouse brain, constraining our model with a recently acquired anatomical brain connectivity dataset [3]. Our simulations of the resulting large-scale cortical network reveal a hierarchy of crossmodal excitability properties, with areas at the top of the hierarchy being the best candidates for integrating information from multiple modalities. Furthermore, our model predicts that the position of a given area in this hierarchy is highly fluid and depends on the strength of the sensory input received by the network. For example, we observe that the particular set of areas integrating visuotactile stimuli changes with the level of visual contrast. By simulating a simplified network model and developing its corresponding mean-field approximation, we determine that the origin of these hierarchical dynamics is the structural heterogeneity of the network, a salient property of cortical networks [3,4]. 
Finally, we extend our results to macaque cortical networks [5] to show that the hierarchy of crossmodal excitability is also present in other mammals, and we characterize how frequency-specific interactions are affected by hierarchical dynamics and define functional connectivity [6]. Our work provides a compelling explanation of why it is not possible to identify unique MSI areas even for a well-defined multisensory task, and suggests that MSI circuits are highly context-dependent.

Acknowledgements: This study was financed in part by the University of Amsterdam and the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior Brasil (Capes) - Finance Code 001.

References

  1. Chandrasekaran C. Computational principles and models of multisensory integration. Current Opinion in Neurobiology. 2017; 43: 25-34.

  2. Hong G, Lieber CM. Novel electrode technologies for neural recordings. Nature Reviews Neuroscience. 2019; 20(6): 330-45.

  3. Gamanut R, et al. The mouse cortical connectome, characterized by an ultra-dense cortical graph, maintains specificity by distinct connectivity profiles. Neuron. 2018; 97: 698-715.

  4. Mejias JF, Longtin A. Differential effects of excitatory and inhibitory heterogeneity on the gain and asynchronous state of sparse cortical networks. Frontiers in Computational Neuroscience. 2014; 8: 107.

  5. Mejias JF, Murray JD, Kennedy H, Wang XJ. Feedforward and feedback frequency-dependent interactions in a large-scale laminar network of the primate cortex. Science Advances. 2016; 2(11): e1601335.

  6. Nunes RV, Reyes MB, De Camargo RY. Evaluation of connectivity estimates using spiking neuronal network models. Biological Cybernetics. 2019; 113(3): 309-20.

P135 A cortical model examining mismatch negativity deficits in schizophrenia

Gili Karni1, Christoph Metzner2

1Minerva at KGI, San Francisco, California, United States of America; 2Technische Universität Berlin, Department of Software Engineering and Theoretical Computer Science, Berlin, Germany

Correspondence: Christoph Metzner (cmetzner@ni.tu-berlin.de)

BMC Neuroscience 2020, 21(Suppl 1):P135

Recent advances in computational modeling, genome-wide association studies, neuroimaging, and theoretical neuroscience offer new opportunities to study neuropsychiatric disorders such as schizophrenia (SZC) [1]. However, despite repeated examination of its well-characterized phenotypes, our understanding of SZC’s neurophysiological biomarkers and cortical dynamics remains elusive.

This study presents a biophysical spiking neuron model of perceptual inference based on the predictive coding framework [2]. The model, implemented in NetPyNE [3], incorporates various single-cell models of both excitatory and inhibitory neurons [4,5], mimicking the circuits of the primary auditory cortex. It allows exploration of the effects that bio-genetic variants (expressed via alterations of ion channels or synaptic mechanisms, see [6]) have on auditory mismatch negativity (MMN) deficits, a common biomarker for SZC [7]. In particular, the model distinguishes between repetition suppression and prediction error and examines their respective contributions to the MMN. The first part of this report establishes the model’s explanatory power using two well-known paradigms, the oddball paradigm and the cascade paradigm, both of which reproduce the electrophysiological measures of the MMN in healthy subjects. Then, by tuning the parameters of the single-neuron equations or the network’s synaptic weights, the model exhibits the expected LFP changes associated with SZC [8].
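A minimal sketch of an oddball stimulus sequence of the kind used in the first paradigm (the deviant probability here is an illustrative value, not the one used in the study):

```python
import random

def oddball_sequence(n, deviant_prob=0.1, seed=0):
    """Stimulus sequence for an auditory oddball paradigm: a stream of
    'standard' tones with rare 'deviant' tones. The MMN is the difference
    between responses to deviants and standards."""
    rng = random.Random(seed)
    return ["deviant" if rng.random() < deviant_prob else "standard"
            for _ in range(n)]

seq = oddball_sequence(1000)
print(seq.count("deviant"))        # roughly 10% of trials
```

Because deviants are rare, repetition suppression builds up for the standard, and the deviant additionally elicits a prediction error; the model separates these two contributions.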

This model therefore enables exploration of how biogenetic alterations affect the underlying components of the observed MMN deficits. Novel, though preliminary, predictions are presented, and suggested future steps for validation are listed. This model could support studies exploring genetic effects on the MMN (or on other aspects of predictive coding) in the auditory cortex.

References

  1. Krystal JH, et al. Computational psychiatry and the challenge of schizophrenia. 2017.

  2. Bastos AM, et al. Canonical microcircuits for predictive coding. Neuron. 2012; 76(4): 695-711.

  3. Garrido MI, Kilner JM, Stephan KE, Friston KJ. The mismatch negativity: a review of underlying mechanisms. Clinical Neurophysiology. 2009; 120(3): 453-63.

  4. Beeman D. Comparison with human layer 2/3 pyramidal cell dendritic morphologies. Poster session presented at the meeting of the Society for Neuroscience 2018, San Diego.

  5. Vierling-Claassen D, Cardin J, Moore CI, Jones SR. Computational modeling of distinct neocortical oscillations driven by cell-type selective optogenetic drive: separable resonant circuits controlled by low-threshold spiking and fast-spiking interneurons. Frontiers in Human Neuroscience. 2010; 4: 198.

  6. Mäki-Marttunen T, et al. Functional effects of schizophrenia-linked genetic variants on intrinsic single-neuron excitability: a modeling study. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging. 2016; 1(1): 49-59.

  7. Dura-Bernal S, et al. NetPyNE, a tool for data-driven multiscale modeling of brain circuits. eLife. 2019; 8: e44494.

  8. Michie PT, Malmierca MS, Harms L, Todd J. The neurobiology of MMN and implications for schizophrenia. Biological Psychology. 2016; 116: 90-7.

P136 Effect of independent noise on the synchronization of interacting excitatory-inhibitory networks

Lucas Rebscher1, Christoph Metzner2

1Technische Universität Berlin, Neural Information Processing, Berlin, Germany; 2Technische Universität Berlin, Department of Software Engineering and Theoretical Computer Science, Berlin, Germany

Correspondence: Lucas Rebscher (lucas.rebscher@campus.tu-berlin.de)

BMC Neuroscience 2020, 21(Suppl 1):P136

Gamma rhythms play a major role in many brain processes, such as attention, working memory and sensory processing. The communication-through-coherence (CTC) hypothesis [1,2] suggests that synchronization in the gamma band is a key mechanism of neuronal communication and, counterintuitively, that noise can have beneficial effects on this communication [3].

Recently, Meng et al. [4] showed that when neurons are subject to independent noise, synchronization across interacting networks of inhibitory neurons increases while synchronization within these networks decreases. They focused on inhibitory-inhibitory connections, with gamma band activity produced by the interneuronal network gamma (ING) mechanism. However, experimental and modeling studies [5] point towards an important role of the pyramidal-interneuronal network gamma (PING) mechanism in the cortex, and the established view is that cortico-cortical connections are predominantly excitatory [6].

We build on the results of Meng et al. [4] and aim to verify whether their findings also hold for interacting gamma rhythms produced by a PING mechanism. In our ongoing research, we model interacting excitatory-inhibitory networks and analyze how synchronization changes with the strength and correlation of the noise in different network settings. We expect to see the same effect in our model because (1) the delaying of spiking by inhibition is integral to both ING and PING, and (2) Meng et al. [4] replicated the effect in different neuron models as well as in relaxation oscillators.
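Synchronization between two interacting networks can be quantified, for example, with the phase-locking value of their band-limited activity; a generic sketch (not the authors' analysis code; test signals are illustrative):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking(x, y):
    """Phase-locking value between two signals from their analytic (Hilbert)
    phases: near 1 for a fixed phase relation, near 0 for drifting phases."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

t = np.arange(0, 1, 1e-3)
a = np.sin(2 * np.pi * 40 * t)
b = np.sin(2 * np.pi * 40 * t + 0.5)   # constant 0.5 rad lag: high PLV
c = np.sin(2 * np.pi * 47 * t)         # drifting phase: low PLV
print(phase_locking(a, b), phase_locking(a, c))
```

Applied to band-pass filtered activity of two networks before and after adding independent noise, this measure can track the within- versus across-network synchronization changes discussed above.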

Uncovering whether, and if so under which conditions, stochastic fluctuations can have beneficial effects on gamma oscillations produced by a PING mechanism would further our understanding of inter-regional communication. Importantly, it might also yield mechanistic explanations for altered neuronal dynamics in psychiatric disorders: disturbances of gamma-band oscillations, especially reduced synchronization, are a key finding in schizophrenia [7].

References

  1. Fries P. A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends in Cognitive Sciences. 2005; 9(10): 474-480.

  2. Fries P. Rhythms for cognition: communication through coherence. Neuron. 2015; 88(1): 220-235.

  3. McDonnell MD, Ward LM. The benefits of noise in neural systems: bridging theory and experiment. Nature Reviews Neuroscience. 2011; 12(7): 415-425.

  4. Meng JH, Riecke H. Synchronization by uncorrelated noise: interacting rhythms in interconnected oscillator networks. Scientific Reports. 2018; 8(1): 1-4.

  5. Tiesinga P, Sejnowski TJ. Cortical enlightenment: are attentional gamma oscillations driven by ING or PING? Neuron. 2009; 63(6): 727-732.

  6. Lodato S, Arlotta P. Generating neuronal diversity in the mammalian cerebral cortex. Annual Review of Cell and Developmental Biology. 2015; 31: 699-720.

  7. Uhlhaas P, Singer W. Oscillations and neuronal dynamics in schizophrenia: the search for basic symptoms and translational opportunities. Biological Psychiatry. 2015; 77(12): 1001-1009.

P137 General anesthesia reduces complexity and temporal asymmetry of the informational structures derived from neural recordings in Drosophila

Roberto Munoz1, Angus Leung2, Aidan Zecevik1, Felix Pollock1, Dror Cohen3, Bruno van Swinderen4, Naotsugu Tsuchiya5, Kavan Modi1

1Monash University, School of Physics and Astronomy, Melbourne, Australia; 2Monash University, School of Psychological Sciences, Melbourne, Australia; 3National Institute of Information and Communications Technology, Osaka, Japan; 4The University of Queensland, St Lucia, Australia; 5Monash University, School of Psychological Sciences and Turner Institute for Brain and Mental Health, Melbourne, Australia

Correspondence: Roberto Munoz (roberto.munoz@monash.edu)

BMC Neuroscience 2020, 21(Suppl 1):P137

We apply techniques from the field of computational mechanics to evaluate the statistical complexity of neural recording data from fruit flies. First, we connect statistical complexity to the flies’ level of conscious arousal, which is manipulated by general anaesthesia (isoflurane). We show that the complexity of even single channel time series data decreases under anaesthesia. The observed difference in complexity between the two states of conscious arousal increases as higher orders of temporal correlations are taken into account. We then go on to show that, in addition to reducing complexity, anaesthesia also modulates the informational structure between the forward and reverse-time neural signals. Specifically, using three distinct notions of temporal asymmetry we show that anaesthesia reduces temporal asymmetry on information-theoretic and information-geometric grounds. In contrast to prior work, our results show that: (1) Complexity differences can emerge at very short time scales and across broad regions of the fly brain, thus heralding the macroscopic state of anaesthesia in a previously unforeseen manner, and (2) that general anaesthesia also modulates the temporal asymmetry of neural signals. Together, our results demonstrate that anaesthetised brains become both less structured and more reversible.
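One generic notion of temporal asymmetry can be sketched in a few lines: the KL divergence between the forward pair distribution P(x_t, x_{t+1}) of a discretised signal and its time-reversed counterpart P(x_{t+1}, x_t), which vanishes for a statistically reversible process. This is a textbook-style illustration only, not any of the three measures used in the study.

```python
import numpy as np

def temporal_asymmetry(symbols, n_states):
    """KL divergence between the forward pair distribution of a
    discretised signal and its time-reversed counterpart. Zero for a
    statistically reversible process; illustrative measure only."""
    joint = np.full((n_states, n_states), 0.5)   # pseudocounts avoid log 0
    for a, b in zip(symbols[:-1], symbols[1:]):
        joint[a, b] += 1.0
    joint /= joint.sum()
    # joint.T is the pair distribution of the time-reversed signal
    return float(np.sum(joint * np.log(joint / joint.T)))

cyclic = [0, 1, 2] * 50        # directed cycle: strongly irreversible
alternating = [0, 1] * 75      # back-and-forth: statistically reversible
```

On these toy signals the directed cycle scores high while the alternating signal scores near zero, mirroring the reversibility contrast described above.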

P138 Processing capacity of recurrent spiking networks

Tobias Schulte to Brinke1, Fahad Khalid2, Renato Duarte3, Abigail Morrison1

1Forschungszentrum Jülich, Jülich, Germany; 2Forschungszentrum Jülich, Institute for Advanced Simulation and Jülich Supercomputing Centre, Jülich, Germany; 3Jülich Research Center, Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich, Germany

Correspondence: Tobias Schulte to Brinke (t.schulte.to.brinke@fz-juelich.de)

BMC Neuroscience 2020, 21(Suppl 1):P138

One of the most prevalent characteristics of neurobiological systems is the abundance of recurrent connectivity. Regardless of the spatial scale considered, recurrence is a fundamental design principle and a core anatomical feature, permeating the micro-, meso- and macroscopic levels. In essence, the brain (and, in particular, the mammalian neocortex) can be seen as a large recurrent network of recurrent networks. Despite the ubiquity of these observations, it remains unclear whether recurrence and the characteristics of its biophysical properties correspond to important functional specializations and if so, to what extent.

Intuitively, from a computational perspective, recurrence allows information to be propagated in time, i.e. past information reverberates so as to influence online processing, endowing circuits with memory and sensitivity to temporal structure. However, even in its simpler formulations, the functional relevance and computational consequences of recurrence in biophysical models of spiking networks are not clear or unambiguous, and its effects vary depending on the type and characteristics of the system under analysis and the nature of the computational task. Therefore, it would be extremely useful, from both an engineering and a neurobiological perspective, to know to what extent recurrence is necessary for neural computation.

In this work, we set out to quantify the extent to which recurrence modulates a circuit’s computational capacity, by systematically measuring its ability to perform arbitrary transformations on an input, following [1]. By varying the strength and density of recurrent connections in balanced networks of spiking neurons, we evaluate the effect of recurrence on the complexity of the transformations the circuit can carry out and on the memory it is able to sustain. Preliminary results demonstrate constraints on recurrent connectivity that optimize its processing capabilities for mappings that involve both linear memory and varying degrees of nonlinearity.

Additionally, given that the metric we employ is particularly computationally heavy (it evaluates the system’s capacity to represent thousands of target functions), a careful optimization and parallelization strategy is required to enable its application to networks of neuroscientific interest. Our software pre-computes the thousands of necessary target polynomial functions for each point in a large combinatorial space, accesses these target functions through an efficient lookup operation, caches functions that are called multiple times with the same inputs, and optimizes the most compute-intensive hotspots with Cython. In combination with MPI for inter-node communication, this results in a highly scalable and computationally efficient implementation for determining the processing capacity of a dynamical system.
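The per-function capacity of [1] can be sketched compactly: drive a dynamical system with an i.i.d. input stream and measure the in-sample R² of a linear readout against target functions of the input history. The network below is an echo-state-style surrogate rather than the spiking networks of this work, and all sizes and scalings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def capacity(states, target):
    """In-sample R^2 of the best linear readout from network states to a
    target function of the input history: the per-function capacity in
    the sense of Dambre et al. [1]. Summing over an orthogonal family of
    targets gives the total processing capacity."""
    X = np.column_stack([states, np.ones(len(states))])
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = X @ coef - target
    return 1.0 - np.mean(resid ** 2) / np.var(target)

# Surrogate dynamical system: a small echo-state-style rate network
# (sizes, spectral radius and input scaling are arbitrary choices).
T, N = 3000, 60
u = rng.uniform(-1, 1, T)                        # i.i.d. input stream
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
w_in = rng.normal(0.0, 1.0, N)
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

warm = 100                                       # discard initial transient
delayed = u[warm - 3:T - 3]                      # input delayed by 3 steps
c_linear = capacity(states[warm:], delayed)              # linear memory
c_quad = capacity(states[warm:], delayed ** 2 - 1 / 3)   # ~2nd Legendre poly
```

In the full method, thousands of such polynomial targets (varying delays and degrees) are evaluated, which is what motivates the pre-computation and caching described above.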

Acknowledgments: The authors gratefully acknowledge the computing time granted by the JARA Vergabegremium and provided on the JARA Partition part of the supercomputer JURECA at Forschungszentrum Jülich.

Reference

  1. Dambre J, Verstraeten D, Schrauwen B, Massar S. Information processing capacity of dynamical systems. Scientific Reports. 2012; 2: 514.

P139 Stereotyped population dynamics in the medial entorhinal cortex

Soledad G Cogno1, Flavio Donato2, Horst A Obenhaus1, Irene R Jacobsen1, May-Britt Moser1, Edvard I Moser1

1NTNU, Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Trondheim, Norway; 2University of Basel, Biozentrum, Basel, Switzerland

Correspondence: Soledad G Cogno (soledad.g.cogno@ntnu.no)

BMC Neuroscience 2020, 21(Suppl 1):P139

The medial entorhinal cortex (MEC) supports the brain’s representation of space with distinct cell types (grid, border, object-vector, head-direction and speed cells). Since no single sensory stimulus can faithfully predict the firing of these cells, attractor network models postulate that spatially-tuned firing emerges from specific connectivity motifs. To determine how those motifs constrain self-organized activity in the MEC, we tested mice in a spontaneous locomotion task under sensory-deprived conditions, when activity is likely determined by the intrinsic structure of the network. Using 2-photon calcium imaging, we monitored the activity of large populations of MEC neurons in mice running on a wheel in darkness.

To reveal network dynamics, we applied dimensionality reduction techniques to the spike matrix. This unveiled motifs involving the sequential activation of neurons (“waves”). Waves lasted from tens of seconds to minutes, swept through the entire network of active cells and did not exhibit any anatomical organization. Waves were not found in spike-time-shuffled data. Furthermore, waves did not map the position of the mouse on the wheel and were not restricted to running epochs. Single neurons exhibited a wide range of degrees of locking to the waves, indicating that the observed dynamics are a population effect rather than a single-cell phenomenon. Overall, our results suggest that a large fraction of MEC-L2 neurons participates in common global dynamics that often take the form of stereotyped waves. These activity patterns might couple the activity of neurons with distinct tuning characteristics in MEC.
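As an illustration of this style of analysis, the sketch below applies PCA (via SVD) to a synthetic spike matrix containing a sequential wave; the data-generation step is entirely hypothetical and stands in for the calcium-imaging recordings.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "spike matrix" (neurons x time bins) with a slow sequential
# wave of activation, loosely mimicking the motifs described above.
n_neurons, n_bins = 80, 400
t = np.arange(n_bins)
peaks = np.linspace(0, n_bins, n_neurons, endpoint=False)
rates = np.exp(-0.5 * ((t[None, :] - peaks[:, None]) / 20.0) ** 2)
spikes = rng.poisson(rates * 2.0)

# PCA via SVD of the mean-centred matrix; a sequential wave shows up as
# smooth, high-variance trajectories in the leading components.
X = spikes - spikes.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)   # variance fraction per component
pc1 = Vt[0]                           # time course of the first component
```

Sorting neurons by their loading phase on the leading components is a common follow-up step for visualising such waves.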

P140 How hard are NP-complete problems for humans?

Pablo Franco, Karlo Doroc, Nitin Yadav, Peter Bossaerts, Carsten Murawski

University of Melbourne, Department of Finance, Melbourne, Australia

Correspondence: Pablo Franco (jfranco1@student.unimelb.edu.au)

BMC Neuroscience 2020, 21(Suppl 1):P140

It is widely accepted that humans have limited cognitive resources and that these finite resources impose restrictions on what the brain can compute. Although endowed with limited computational power, humans are presented daily with decisions that require solving complex problems. This raises a tension between computational capacity and the computational requirements of solving a problem. In order to understand how the hardness of problems affects problem-solving ability, we propose a measure to quantify the difficulty of problems for humans. For this we make use of computational complexity theory, a widely studied framework for quantifying the hardness of problems for electronic computers. It has been proposed that computational complexity theory can be applied to humans, but whether this is the case remains an open empirical question.

We study how the difficulty of problems affects decision quality by studying a measure of expected difficulty over random instances (i.e. random cases) of a problem. This measure, which we refer to as instance complexity (IC), quantifies the expected hardness of decision problems, that is, problems that have a yes/no answer. More specifically, it captures how constrained the problem is, based on a small number of features of the instance. Overall, IC has three main advantages. Firstly, it is a well-studied measure that has been shown to apply to a large range of problems for electronic computers. Secondly, it allows calculation of the expected hardness of a problem ex ante, that is, before solving the problem. And lastly, it captures complexity independently of any particular algorithm or model of computation; it is thus considered to characterize the inherent computational complexity of random instances, independent of the system solving them.
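As a toy illustration of feature-based, ex-ante hardness measures of this kind, the sketch below computes two normalised constrainedness features of a 0/1 knapsack decision instance; instances near the middle of this feature space tend to sit at the solvable/unsolvable phase transition, where expected hardness peaks. This is a generic stand-in, not the authors' IC definition.

```python
def knapsack_constrainedness(weights, values, capacity, target):
    """Two normalised instance features of the 0/1 knapsack decision
    problem ("is there a packing with weight <= capacity and
    value >= target?"): how loose the weight constraint is, and how
    demanding the profit target is. Computable without solving the
    instance; an illustrative stand-in for IC."""
    alpha = capacity / sum(weights)   # looseness of the weight constraint
    beta = target / sum(values)       # demandingness of the profit target
    return alpha, beta

a, b = knapsack_constrainedness(
    weights=[4, 3, 6, 5], values=[5, 2, 7, 4], capacity=9, target=10)
```

Both features are computed from the instance alone, which is what makes an ex-ante expected-hardness estimate possible.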

In this study we test whether IC is a generalizable measure, for humans, of the expected hardness of solving a problem. For this purpose, we ran a set of experiments in which human participants solved instances of one of three widely studied NP-complete problems, namely the Traveling Salesperson problem, the Knapsack problem or Boolean Satisfiability. Instances varied in their IC. We show that participants expended more effort on instances with higher IC, but that decision quality was lower on those instances. Together, our results suggest that IC can be used to measure the expected computational requirements of solving random instances of a problem, based on an instance’s features.

The findings of this study speak to the broader question of whether there is a link between the computational model of humans and that of electronic computers. Specifically, this study gives evidence that the average hardness of random instances can be characterized via the same set of parameters for both computing systems, supporting the view that computational complexity theory applies to humans. Moreover, we argue that decision-makers could use IC to estimate the expected costs of performing a task, since IC can be estimated without having to solve the problem. Most importantly, our findings suggest that people modulate their effort according to IC. Altogether, this opens future avenues for research, based on IC, that could shed light on the cognitive resource allocation process in the brain.

P141 A synthetic likelihood solution to the silent synapse estimation problem

Michael Lynn1, Jean-Claude Beique1, Kevin Lee1, Cary Soares1, Richard Naud2

1University of Ottawa, Cellular and Molecular Medicine, Ottawa, Canada; 2University of Ottawa, Cellular and Molecular Medicine; Department of Physics, Ottawa, Canada

Correspondence: Michael Lynn (mlynn101@uottawa.ca)

BMC Neuroscience 2020, 21(Suppl 1):P141

Functional features of populations of synapses are typically inferred from random electrophysiological sampling of small subsets of synapses. Are these samples unbiased? Here, we developed a biophysically constrained statistical framework for addressing this question and applied it to assess the performance of a widely used method based on a failure-rate analysis to quantify the occurrence of silent (AMPAR- lacking) synapses in neural networks. We simulated this method in silico and found that it is characterized by strong and systematic biases, poor reliability and weak statistical power. Key conclusions were validated by whole-cell recordings from hippocampal neurons. To address these shortcomings, we developed a simulator of the experimental protocol and used it to compute a synthetic likelihood. By maximizing the likelihood, we inferred silent synapse fraction with no bias, low variance and superior statistical power over alternatives. Together, this generalizable approach highlights how a simulator of experimental methodologies can substantially improve the estimation of physiological properties.
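The logic of the approach can be sketched as follows. The simulator below is a deliberately crude toy of a failure-rate experiment (all parameter values hypothetical, not the published protocol), used only to show how a Gaussian synthetic likelihood over a summary statistic is built and maximised.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(silent_frac, n_syn=12, n_trials=200, p_release=0.3):
    """Toy simulator of a failure-rate experiment. The summary statistic
    is the difference between failure rates at hyperpolarised potentials
    (where silent, AMPAR-lacking synapses contribute nothing) and at
    depolarised potentials (where all synapses respond). All parameter
    values are hypothetical assumptions."""
    n_silent = rng.binomial(n_syn, silent_frac)
    p_fail_hyp = (1 - p_release) ** (n_syn - n_silent)
    p_fail_dep = (1 - p_release) ** n_syn
    f_hyp = rng.binomial(n_trials, p_fail_hyp) / n_trials
    f_dep = rng.binomial(n_trials, p_fail_dep) / n_trials
    return f_hyp - f_dep

def synthetic_loglik(theta, observed, n_sims=200):
    """Gaussian synthetic likelihood of the observed summary statistic
    under repeated simulation at parameter value theta."""
    sims = np.array([simulate(theta) for _ in range(n_sims)])
    mu, var = sims.mean(), sims.var() + 1e-9
    return -0.5 * ((observed - mu) ** 2 / var + np.log(2 * np.pi * var))

observed = simulate(0.3)               # stand-in for the experimental data
grid = np.linspace(0.0, 0.9, 10)
theta_hat = grid[int(np.argmax([synthetic_loglik(th, observed)
                                for th in grid]))]
```

The estimate is the grid point maximising the synthetic likelihood; the full method replaces this toy simulator with a biophysically constrained simulator of the actual recording protocol.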

P142 Using reinforcement learning to train biophysically detailed models of visual-motor cortex to play Atari games

Haroon Anwar1, Salvador Dura-Bernal1, Cliff C Kerr2, George L Chadderdon3, William W Lytton4, Peter Lakatos1, Samuel A Neymotin1

1Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America; 2University of Sydney, School of Physics, Sydney, Australia; 3Burnet Institute, Melbourne, Australia; 4SUNY Downstate Medical Center, Department of Physiology and Pharmacology, Brooklyn, New York, United States of America

Correspondence: Haroon Anwar (haroon.anwar@nki.rfmh.org)

BMC Neuroscience 2020, 21(Suppl 1):P142

Computational neuroscientists build biophysically detailed models of neurons and neural circuits primarily to understand the origin of dynamics observed in experimental data. Much of this effort is dedicated to matching the ensemble activity of neurons in the modeled brain region, while often ignoring multimodal information flow across brain regions and the associated behaviors. Although these efforts have led to an improved mechanistic understanding of the electrophysiological behavior of diverse types of neurons and neural networks, they fall short of linking detailed models with associated behaviors in a closed-loop setting. In this study, we bridged that gap by developing biophysically detailed multimodal models of brain regions involved in processing visual information, generating motor behaviors, and making associations between visual and motor neural representations by deploying reward-based learning mechanisms. We built a simple model of visual cortex receiving topological inputs from the interfaced Atari game ‘Pong’ (provided by OpenAI Gym). This modeled region processed, integrated and relayed visual information about the game environment across the hierarchy of higher-order visual areas (V1/V2 -> V4 -> IT). Moving from V1 to IT, the number of neurons in each area decreased whereas the number of synaptic connections increased. This feature was included to reflect the anatomical convergence suggested in the literature and to produce broader tuning for input features in progression up the visual cortical hierarchy. We used compartmental models of both excitatory and inhibitory neurons, interconnected via AMPA (for excitation) or GABA (for inhibition) synapses. The strengths of synaptic connections were adjusted so that information was reliably transmitted across visual areas. In our motor cortex model, neurons associated with a particular motor action were grouped together and received inputs from all visual areas.
For the game Pong, we used two populations of motor neurons for generating “up” and “down” move commands. All synapses between visual and motor cortex were plastic, so that connection strengths could be increased or decreased via reinforcement learning. When an action was generated in the motor cortex model, driven by the visual representation of the environment in the visual cortex model, that action produced a move in the game, which in turn updated the environment and triggered a response to the action: reward (+1), punishment (-1) or no response (0). These signals drove reinforcement learning at the synapses between visual and motor cortex, strengthening or weakening them so that the model could learn which actions were rewarding in a given environment. Here we present an exploratory analysis as a proof of concept for using biophysically detailed modeling of neural circuits to solve problems that have so far only been tackled using artificial neural networks. We aim to use this framework both to simplify the model further, making it more deep-learning-like, and to extend the architecture to make it more biologically realistic. Comparing the performance of models trained using different architectures will allow us to dissect the mechanisms underlying the production of behavior and to bridge the gap between the electrophysiological dynamics of neural circuits and associated behaviors.
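The learning rule described above can be sketched as a three-factor update: plastic visual-to-motor weights change in proportion to an eligibility trace of recent pre/post coincidence, gated by the reward signal from the game. Population sizes, rates and constants below are illustrative assumptions, not those of the detailed compartmental model.

```python
import numpy as np

rng = np.random.default_rng(4)

n_vis, n_mot = 20, 2                 # e.g. "up" and "down" populations
W = rng.uniform(0.0, 0.1, (n_mot, n_vis))   # plastic visual->motor weights
eligibility = np.zeros_like(W)
lr, decay = 0.05, 0.9                # learning rate, trace decay (assumed)

pattern_up = rng.uniform(0.5, 1.0, n_vis)   # input that should mean "up"
for trial in range(200):
    action = int(np.argmax(W @ pattern_up))  # winning population acts
    reward = 1.0 if action == 0 else -1.0    # "up" (index 0) is correct
    post = np.zeros(n_mot)
    post[action] = 1.0
    # eligibility: decaying trace of pre/post coincidence
    eligibility = decay * eligibility + np.outer(post, pattern_up)
    W = W + lr * reward * eligibility        # reward gates the update
```

After training, the input pattern reliably drives the rewarded action, the same closed-loop credit assignment described for the Pong task.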

P143 Biophysically-detailed multiscale model of macaque auditory thalamocortical circuits reproduces physiological oscillations

Salvador Dura-Bernal1, Erica Y Griffith1, Annamaria Barczak2, Monica N O’Connell2, Tammy McGinnis2, Haroon Anwar3, William W Lytton1, Peter Lakatos3, Samuel A Neymotin3

1SUNY Downstate Medical Center, Department of Physiology and Pharmacology, Brooklyn, New York, United States of America; 2Nathan Kline Institute for Psychiatric Research, Center for Biomedical Imaging and Neuromodulation, Orangeburg, New York, United States of America; 3Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America

Correspondence: Salvador Dura-Bernal (salvadordura@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P143

We used the NEURON simulator with NetPyNE to develop a biophysically-detailed model of the macaque auditory thalamocortical system. We simulated a cortical column with a depth of 2000 µm and a diameter of 200 µm, containing over 12,000 neurons and 30 million synapses. Neuron densities, laminar locations, classes, morphology and biophysics, and connectivity at the long-range, local and dendritic scales were derived from published experimental data (Fig. 1). We used the model to investigate the mechanisms and function of neuronal oscillatory patterns observed in electrophysiological data recorded simultaneously from nonhuman primate primary auditory cortex (A1) and the medial geniculate body (MGB), while awake subjects were presented with different classes of auditory stimuli, including speech.

Fig. 1
figure51

Dimensions of simulated A1 column with overall laminar cell densities, layer boundaries, cell morphologies and distribution of populations

The model A1 includes 6 cortical layers and multiple populations of neurons consisting of 4 excitatory (intratelencephalic (IT), spiny stellate (ITS), pyramidal-tract (PT), and corticothalamic (CT)), and 4 inhibitory types (somatostatin (SOM), parvalbumin (PV), vasoactive intestinal peptide (VIP), and neurogliaform (NGF)). Cells were distributed across layers 2-6, except NGF cells which were also included in L1, as these have been identified as important targets of the thalamic matrix. The A1 model was reciprocally connected to the thalamic model to mimic anatomically verified connectivity. The thalamic model included the medial geniculate body (MGB) and the thalamic reticular nucleus (TRN). MGB includes core and matrix populations of thalamocortical (TC) neurons with distinct projection patterns to different layers of A1, and thalamic interneurons (TI) projecting locally. TRN included thalamic reticular neurons (RE) primarily inhibiting MGB.

Thalamocortical neurons were driven by artificial spike generators simulating background inputs from non-modeled brain regions. Auditory stimulus related inputs were simulated using phenomenological models of the cochlear auditory nerve and the inferior colliculus (IC) that captured the main physiological transformations occurring in these regions. The output of the IC model was then used to drive the thalamocortical populations. This allowed us to provide any arbitrary sound as input to the model, including those used during our macaque in vivo experiments, thus facilitating matching model to data.

We used evolutionary algorithms to tune the network to generate experimentally-constrained firing rates for each of the 42 neural populations. We tuned 12 high-level connectivity parameters, including background input and E->E, E->I, I->E and I->I weight gains, within biologically constrained parameter ranges. Each simulated second required approximately 1 hour on 96 supercomputer cores. For the evolutionary optimization we ran 100 simultaneous simulations (9,600 cores) per generation. To the best of our knowledge, this is the first time evolutionary optimization has been successfully used for large-scale biophysically-detailed network models.
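The structure of such an evolutionary search can be sketched as follows, with a cheap surrogate standing in for the expensive NetPyNE simulation that evaluates fitness in the actual work; all surrogate details (the linear map, mutation scale, selection fraction) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Individuals are 12-dimensional parameter vectors (standing in for the
# high-level connectivity gains); fitness is the negative squared
# mismatch between surrogate "simulated" rates and target rates.
n_params, n_pops, pop_size, n_gen = 12, 8, 100, 30
target_rates = rng.uniform(1.0, 10.0, n_pops)
A = rng.normal(0.0, 1.0, (n_pops, n_params))   # surrogate rate model

def fitness(params):
    rates = np.abs(A @ params)                 # surrogate firing rates
    return -np.sum((rates - target_rates) ** 2)

pop = rng.uniform(-1.0, 1.0, (pop_size, n_params))
score_initial = max(fitness(ind) for ind in pop)
for gen in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argsort(scores)[-pop_size // 5:]]      # keep top 20%
    parents = elite[rng.integers(0, len(elite), pop_size)]
    pop = parents + rng.normal(0.0, 0.1, parents.shape)   # mutate offspring
best_score = max(fitness(ind) for ind in pop)
```

In the real pipeline each `fitness` call is a full network simulation, which is why a whole generation (100 individuals) is evaluated in parallel across thousands of cores.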

We will use our model to determine mechanistic origins of spatiotemporal neuronal oscillatory patterns observed in vivo using an iterative modeling data-analysis process. At the end of the process, to confirm model predictions, we will use targeted deep brain electrical microstimulation and pharmacological manipulations.

Acknowledgements: Funded by NIH NIDCDR01DC012947, U24EB028998, U01EB017695, DOH01-C32250GG-3450000, Army Research Office W911NF-19-1-0402.

P144 Closed loop parameter estimation using GPU acceleration with GeNN

Felix B Kern1, Michael Crossley2, Rafael Levi3, György Kemenes2, Thomas Nowotny4

1University of Tokyo, International Research Center for Neurointelligence, Tokyo, Japan; 2University of Sussex, School of Life Sciences, Brighton, United Kingdom; 3Universidad Autonoma de Madrid, Escuela Politechnica Superior, Madrid, Spain; 4University of Sussex, School of Engineering and Informatics, Brighton, United Kingdom

Correspondence: Thomas Nowotny (t.nowotny@sussex.ac.uk)

BMC Neuroscience 2020, 21(Suppl 1):P144

A common approach to understanding neuronal function is to build accurate and predictive models of the excitable membrane. Models are typically based on voltage clamp data where ion channels of different types are pharmacologically isolated and the stationary state and timescale of (in)activation are estimated based on the transmembrane currents observed in response to a set of constant voltage steps. The basic method can be extended with different stepping protocols or input waveforms and by performing parameter fits on the full time series. Further improvements are achieved with parameter estimation on additional current clamp data, an active field of research. Some examples of employed estimation approaches include adaptive coupling to synchronise the model to data, driving neurons with chaotic input signals, and using distributions of parameter values in a path integral method.

Enabled by our GPU-enhanced neural networks (GeNN) framework [1], we present work that makes a different conceptual advance: performing parameter estimation in an online, closed-loop fashion while the neuron is being recorded. In doing so we can select stimulations that are highly informative for the parameter estimation process at any given time. We can also track time-dependent parameters by observing how parameter estimates develop over time.

To demonstrate our new method we use the model system of the B1 motor cell in the buccal ganglion of the pond snail Lymnaea stagnalis. Neurons are recorded with two sharp electrodes in current clamp mode. We have built a conductance based initial model from a published set of Hodgkin-Huxley conductances [2], using standard parameter estimation methods and data we obtained with a simple set of current steps. To perform closed loop parameter estimation, we use a genetic algorithm (GA) in which a population of 8192 model neurons with candidate parameter values is simulated on a GPU (NVIDIA Tesla K40c) in parallel and in real time. Models are then compared to the response of the recorded neuron and selected for goodness of fit, as is standard for a GA approach. The novel element of our method is the next step, where we evaluate a pool of candidate stimuli against the model population, selecting the stimulus with the most diverse responses for the next epoch. The selected stimulus is then applied to both the recorded neuron and the population of models and the normal GA procedure continues.
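The stimulus-selection step can be sketched as follows: evaluate each candidate stimulus against the whole model population and pick the one whose simulated responses are most diverse across models, i.e. most informative for telling candidate parameter sets apart. The toy leaky-membrane model and candidate current steps below are illustrative assumptions, not the B1 cell model or the actual stimulus pool.

```python
import numpy as np

def response(params, stimulus, dt=0.1, v0=-60.0):
    """Toy leaky membrane: candidate conductance g and reversal e_rev
    (hypothetical parameters) driven by an injected-current waveform."""
    g, e_rev = params
    v = v0
    trace = np.empty(len(stimulus))
    for i, i_inj in enumerate(stimulus):
        v += dt * (-g * (v - e_rev) + i_inj)
        trace[i] = v
    return trace

# A small population of candidate parameter sets and candidate stimuli.
models = [(g, e) for g in (0.5, 1.0, 2.0) for e in (-70.0, -50.0)]
stimuli = [np.full(50, amp) for amp in (1.0, 5.0, 20.0)]  # current steps

def diversity(stim):
    traces = np.array([response(m, stim) for m in models])
    return float(traces.var(axis=0).mean())   # across-model spread

best = max(stimuli, key=diversity)            # stimulus to apply next
```

Here the strongest step pulls the candidate models furthest apart, so it is the one the closed loop would apply to both the real neuron and the model population next.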

Figure 1 shows a representative example of online fitting to a neuron. We first fit a set of 52 parameters to fine-tune model kinetics to the cell under stationary conditions. Then, we restricted fitting to non-kinetic parameters (maximum conductances, equilibrium potentials, and capacitance) and continued to run the algorithm described above, while at the same time manipulating sodium levels in the extracellular bath. The online fitting procedure can detect and track the change in sodium concentration as putative changes in sodium conductance and reversal potential.

Fig. 1
figure52

Example fitting run. Sodium-free saline is washed in (epochs 1-50), and gradually removed (51-100). Left: Voltage of the neuron (red) and best model (blue) in response to a ramp stimulus. Bottom: Spike counts with the same stimulus. Right: Value distributions of some of the fitted parameters (y axis) through time (x axis). The model recapitulates the change in sodium (top 2 plots)

Acknowledgments: This work was partially supported by the EU under Grant Agreement 785907 (HBP SGA2).

References

  1. Yavuz E, Turner JP, Nowotny T. GeNN: a code generation framework for accelerated brain simulations. Scientific Reports. 2016; 6: 18854.

  2. Vehovszky A, Szabo H, Elliott CJH. Octopamine increases the excitability of neurons in the snail feeding system by modulation of inward sodium current but not outward potassium currents. BMC Neuroscience. 2005; 6: 70.

P145 Possible roles of non-synaptic interactions between olfactory receptor neurons in insects

Mario Pannunzi1, Thomas Nowotny2

1University of Sussex, Department of Informatics, Brighton, United Kingdom; 2University of Sussex, School of Engineering and Informatics, Brighton, United Kingdom

Correspondence: Mario Pannunzi (mario.pannunzi@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P145

In insects, olfactory receptor neurons (ORNs) are grouped in hairs (sensilla) in a stereotypical way. For example, in Drosophila each sensillum houses 2 or 4 ORNs. ORNs co-housed in the same sensillum interact with each other via a non-synaptic mechanism (NSI, see Fig. 1a) that is still not fully understood. The mechanism could simply be a spandrel, or it could instead improve the function of the insect olfactory system. A number of hypotheses have been suggested to explain the potential role of NSIs [1].

Fig. 1
figure53

a Non-synaptic interaction (NSI) between ORNs is probably mediated by an electrical field interaction between closely apposed ORNs in olfactory sensilla. b Our model: two ORN types and their respective PNs and LNs, in the AL. Each ORN type is tuned to a set of odorants and converges onto its corresponding PNs. Each PN excites its respective LNs, and is inhibited from LNs of the other type

We analyzed two hypotheses suggesting that NSIs play a role in sensing odor mixtures, using a computational model of the first two layers of the Drosophila olfactory system: the ORNs on the antennae and the glomeruli, with projection neurons (PNs) and local neurons (LNs), in the antennal lobe (AL, see Fig. 1b). The model is the first to consider NSIs between ORNs in the context of the circuits of the first and second layers of processing in the insect olfactory pathway. We constrained the model by reproducing the responses to a set of typical odor stimuli reported in the literature. We then tested the feasibility of the hypotheses and compared the advantages of having NSIs against a control network lacking any interaction between ORNs or PNs, and against a network without NSIs but with strong lateral inhibition in the AL, a mechanism proposed as a valid alternative to NSIs.

The two tested hypotheses were: 1) NSIs could improve the concentration ratio identification of a mixture of odorants by increasing the dynamic range over which it can be perceived without distortion. 2) NSIs could help insects to distinguish mixtures of odorants emanating from a single source against those emanating from two separate sources, by improving the capacity to encode the correlation between olfactory stimuli.

For the first hypothesis, we observed that: 1) When comparing the capacity to encode the odorant concentration ratio in short synchronous whiffs via PN responses, both the network with NSIs and the network with AL inhibition outperform the control network. Moreover, NSIs help more than LN inhibition for this task, and the effect is stronger for very short stimuli (<100 ms) than for longer stimuli. 2) When a network with LN inhibition (with NSIs) is stimulated with asynchronous whiffs of two odorants, its PN outputs in response to the second whiff are strongly (mildly) altered by the response to the first whiff.

More complex interpretation is needed when assessing the capacity to encode the correlation between two odorants. We noted that: 1) in terms of average PN activity, the network with LN inhibition is able to encode stimulus correlation but the network with the NSI mechanism is not; yet 2) in terms of peak PN activity (responses above a given threshold), the network with the NSI mechanism encodes correlations better than the one with LN inhibition. This improvement is larger for shorter whiff durations (<100 ms).

Acknowledgments: This research was funded by the Human Frontiers Science Program, grant RGP0053/2015 (Odor Objects), the European Union under grant agreement 785907 (HBP SGA2) and a Leverhulme Trust Research Project Grant.

Reference

1. Todd JL, Baker TC. Function of peripheral olfactory organs. In: Insect olfaction 1999 (pp. 67-96). Springer, Berlin, Heidelberg.

P146 Larger GPU-accelerated brain simulations with procedural connectivity

James Knight1, Thomas Nowotny2

1University of Sussex, Department of Informatics, Brighton, United Kingdom; 2University of Sussex, School of Engineering and Informatics, Brighton, United Kingdom

Correspondence: James Knight (j.c.knight@sussex.ac.uk)

BMC Neuroscience 2020, 21(Suppl 1):P146

Large-scale brain simulations are important tools for investigating the dynamics and function of the brain. However, due to insufficient computing power and a lack of detailed connectivity data, brain models built at the cellular level have often been limited to the scale of individual microcircuits [1]. Larger models of multiple areas have typically been built at a higher level of abstraction. However, recent data [2] showed that, in the cortex, there are features of neuronal activity which can only be reproduced by modelling multiple interconnected microcircuits at the cellular level.

Even small mammals have trillions of synapses and, as each synapse typically requires at least 32 bits of storage in a simulation, a model of this scale requires terabytes of memory – more than any single desktop machine has available. Therefore, until now, simulating large-scale models has required access to a distributed computer system. Such systems are costly and power-hungry, meaning that they are normally shared resources, accessible only to a limited number of well-funded researchers for limited runtimes. Neuromorphic systems are a potential alternative but few are currently able to simulate the density of connectivity found in the brain and most are still prototypes with limited availability. Alternatively, Graphics Processing Units (GPUs) have proved useful in tasks including training deep learning systems. In our previous work [3] we showed that using GeNN – our GPU-accelerated spiking neural network simulator – models with around 100×10³ neurons and 1×10⁹ synapses could be simulated on a single GPU at a similar speed to supercomputers and neuromorphic systems. However, individual GPUs do not have enough memory to simulate larger brain models and GPU clusters suffer from the same issues as any other distributed computer system.
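The memory argument can be made concrete with a quick back-of-the-envelope sketch (the synapse count used below is an illustrative assumption, not a figure from the text):

```python
def synapse_memory_tb(n_synapses: float, bits_per_synapse: int = 32) -> float:
    """Memory needed to store per-synapse state, in terabytes."""
    return n_synapses * bits_per_synapse / 8 / 1e12

# e.g. an assumed 2 trillion synapses at 32 bits each
print(synapse_memory_tb(2e12))  # -> 8.0 TB, far beyond a single desktop machine
```

At 32 bits per synapse, even much smaller multi-area models quickly exceed the tens of gigabytes available on a single GPU, which motivates the procedural approach below.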

Here, we present extensions to GeNN that enable it to ‘procedurally’ generate connectivity and synaptic weights ‘on the go’ as spikes are triggered instead of retrieving them from memory. This approach is well-suited to GPU architectures because their raw computational power is often under-utilised when simulating spiking neural networks due to memory bandwidth limitations. We demonstrate the power of our approach with a model of the Macaque visual cortex consisting of 4×10⁶ neurons and 24×10⁹ synapses [4]. We find that, with our new method, this model can be simulated correctly on a single GPU and up to 35% faster than supercomputer simulations [5]. We believe that this is a significant step towards making large-scale brain modelling accessible to more researchers.
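A minimal CPU sketch of the procedural idea (purely illustrative; this is not GeNN's actual GPU implementation, and all sizes and parameters are made up): instead of storing a synapse list, each presynaptic neuron's targets and weights are regenerated on demand from a deterministically seeded RNG.

```python
import numpy as np

N_POST = 1000        # postsynaptic population size (toy value)
P_CONN = 0.1         # connection probability
W_MEAN, W_SD = 0.05, 0.01

def outgoing_synapses(pre_idx: int, seed: int = 42):
    """Regenerate neuron pre_idx's targets and weights from a per-neuron
    seeded RNG rather than reading them from stored connectivity."""
    rng = np.random.default_rng(seed + pre_idx)
    targets = np.flatnonzero(rng.random(N_POST) < P_CONN)
    weights = rng.normal(W_MEAN, W_SD, size=targets.size)
    return targets, weights

# Determinism: every time neuron 7 spikes, the exact same synapses come back,
# so no per-synapse memory is needed.
t1, w1 = outgoing_synapses(7)
t2, w2 = outgoing_synapses(7)
assert np.array_equal(t1, t2) and np.array_equal(w1, w2)
```

On a GPU this trades memory traffic for arithmetic, which suits hardware whose compute is otherwise under-utilised during spike propagation.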

Fig. 1

Results of macaque visual cortex model simulation. A-C Raster plots of activity of 3% of neurons in 3 areas. Blue: excitatory, red: inhibitory. D-F Statistics for each population across all 32 areas simulated using GeNN and NEST (supercomputer). D Average firing rates. E Average pairwise correlation coefficients. F Average irregularity measured by revised local variation

Acknowledgements: This work was funded by the EPSRC (Grant number EP/P006094/1).

References

1. Potjans T, Diesmann M. The Cell-Type Specific Cortical Microcircuit: Relating Structure and Activity in a Full-Scale Spiking Network Model. Cerebral Cortex. 2014; 24: 785-806.

2. Belitski A, et al. Low-frequency local field potentials and spikes in primary visual cortex convey independent visual information. Journal of Neuroscience. 2008; 28(22): 5696-709.

3. Knight J, Nowotny T. GPUs Outperform Current HPC and Neuromorphic Solutions in Terms of Speed and Energy When Simulating a Highly-Connected Cortical Model. Frontiers in Neuroscience. 2018; 12: 1-19.

4. Schmidt M, et al. A multi-scale layer-resolved spiking network model of resting-state dynamics in macaque visual cortical areas. PLoS Computational Biology. 2018; 14(10): e1006359.

5. Knight J, Nowotny T. Larger GPU-accelerated brain simulations with procedural connectivity. bioRxiv. 2020. 2020.04.27.063693.

P147 Seizure forecasting from long-term EEG and ECG data using critical slowing principle

Wendy Xiong1, Tatiana Kameneva2, Elisabeth Lambert3, Ewan Nurse4

1Swinburne University of Technology, Melbourne, Australia; 2Swinburne University of Technology, Telecommunication Electrical Robotics and Biomedical Engineering, Melbourne, Australia; 3Swinburne University of Technology, School of Health Sciences, Department of Health and Medical Sciences, Melbourne, Australia; 4Seer Medical, Melbourne, Australia

Correspondence: Wendy Xiong (wenjuan.xiong1@gmail.com)

BMC Neuroscience 2020, 21(Suppl 1):P147

Epilepsy is a neurological disorder characterized by recurrent seizures that are transient symptoms of synchronous neuronal activity in the brain. Epilepsy affects more than 50 million people worldwide [1]. In Australia, over 225,000 people live with epilepsy [2]. Seizure prediction allows patients and caregivers to deliver early interventions and prevent serious injuries. Electroencephalography (EEG) has been used to predict seizure onset, with varying success between participants [3,4]. There is increasing interest in using the electrocardiogram (ECG) to help with seizure detection and prediction. The aim of this study is to use long-term continuous recordings of EEG and ECG data to forecast seizures.

EEG and ECG data from 7 patients were used for analysis. Data were recorded using 21 EEG electrodes and 3 ECG electrodes by Seer with an ambulatory video-EEG-ECG system. The average period of recording was 95 hours (range 51-160 hours). Data were annotated by a clinician to indicate seizure onset and offset. On average, 4 clinical seizures occurred per participant (range 2-10). EEG and ECG data were bandpass filtered using a Butterworth filter (1-30 Hz for EEG, 3-45 Hz for ECG).
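The filtering step can be sketched with SciPy; the sampling rate below is an assumption, since the abstract does not state one:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(x, low_hz, high_hz, fs, order=4):
    """Zero-phase Butterworth bandpass filter."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs = 250.0                                # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)              # 10 s of toy signal
# toy "EEG": a 10 Hz component plus 60 Hz line noise
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
filtered = bandpass(eeg, 1.0, 30.0, fs)   # EEG band from the abstract
```

Applying the filter forwards and backwards (`sosfiltfilt`) avoids phase distortion, which matters for the phase analysis described next.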

A characteristic of a system that is nearing a critical transition is critical slowing, which refers to the tendency of the system to take longer to return to equilibrium after perturbations, measured by an increase in signal variance and autocorrelation [5]. The variance and autocorrelation of the EEG and ECG signals were calculated for each electrode in a 1 s window at each time point. The autocorrelation value was taken as the width at half maximum of the autocorrelation function. The instantaneous phases of the variance and autocorrelation signals were calculated at each time point using the Hilbert transform. To extract long (1 day) and short (20 s in EEG, 10 min in ECG) cycles in the variance and autocorrelation signals, a moving average filter was applied. The relationship between seizure onset times and the phases of the variance and autocorrelation signals was investigated in long and short cycles. The probability distribution for seizure occurrence was determined for each time point. The seizure likelihood was classified at three levels: low, medium and high, based on two thresholds defined as functions of the maximum seizure probability. Data analysis was performed in Python 3.
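A simplified version of this feature extraction might look as follows (a sketch only: the sampling rate is assumed, and lag-1 autocorrelation is used here as a common critical-slowing proxy in place of the half-maximum-width measure described above):

```python
import numpy as np
from scipy.signal import hilbert

def sliding_feature(x, win, fn):
    """Apply fn to consecutive non-overlapping windows of length win samples."""
    n = len(x) // win
    return np.array([fn(x[i * win:(i + 1) * win]) for i in range(n)])

def lag1_autocorr(w):
    """Lag-1 autocorrelation of one window (critical-slowing proxy)."""
    w = w - w.mean()
    return float(np.dot(w[:-1], w[1:]) / (np.dot(w, w) + 1e-12))

fs = 250                                                # assumed sampling rate (Hz)
x = np.random.default_rng(0).standard_normal(fs * 600)  # 10 min of toy signal
variance = sliding_feature(x, fs, np.var)               # 1 s windows, as in the abstract
autocorr = sliding_feature(x, fs, lag1_autocorr)

# instantaneous phase of the variance time series via Hilbert transform
phase = np.angle(hilbert(variance - variance.mean()))
```

The resulting phase series is what the seizure onset times are then compared against in the long- and short-cycle analysis.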

Results show that the variance and autocorrelation of the EEG data increased at the time of seizure onset in 66.7% and 68.3% of cases, respectively. The variance and autocorrelation of the ECG data increased at the time of seizure onset in 60% and 50% of cases, respectively. Long and short cycles of variance and autocorrelation yielded consistent results. The results indicate that critical slowing may be present in a neural system during seizures and that this feature could be used to forecast seizures.

References

1. Thijs RD, Surges R, O’Brien TJ, Sander JW. Epilepsy in adults. The Lancet. 2019; 393(10172): 689-701.

2. Epilepsy Action Australia [internet]. Facts and Statistics [cited 2020 July 8]. Available from: www.epilepsy.org.au.

3. Cook MJ, et al. Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study. The Lancet Neurology. 2013; 12(6): 563-71.

4. Karoly PJ, Ung H, Grayden DB, Kuhlmann L, Leyde K, Cook MJ, Freestone DR. The circadian profile of epilepsy improves seizure forecasting. Brain. 2017; 140(8): 2169-82.

5. Scheffer M, et al. Early-warning signals for critical transitions. Nature. 2009; 461(7260): 53-9.

P148 Biophysically grounded mean-field models of neural populations under electrical stimulation

Caglar Cakan1, Klaus Obermayer2

1Technische Universität Berlin, Berlin, Germany; 2Technische Universität Berlin, Department of Software Engineering and Theoretical Computer Science, Berlin, Germany

Correspondence: Caglar Cakan (cakan@ni.tu-berlin.de)

BMC Neuroscience 2020, 21(Suppl 1):P148

Electrical stimulation of neural systems is a key tool for understanding neural dynamics and ultimately for developing clinical treatments. Many applications of electrical stimulation affect large populations of neurons. However, computational models of large networks of spiking neurons are inherently hard to simulate and analyze. We evaluate a reduced mean-field model of excitatory and inhibitory adaptive exponential integrate-and-fire (AdEx) neurons which can be used to efficiently study the effects of electrical stimulation on large neural populations. The rich dynamical properties of this basic cortical model are described in detail and validated using large network simulations. Bifurcation diagrams (Fig. 1) reflecting the network’s state reveal asynchronous up- and down-states, bistable regimes, and oscillatory regions corresponding to fast excitation-inhibition and slow excitation-adaptation feedback loops. The biophysical parameters of the AdEx neuron can be coupled to an electric field with realistic field strengths, which can then be propagated up to the population description. We show how, on the edge of a bifurcation, direct electrical inputs cause network state transitions, such as turning oscillations of the population rate on and off. Oscillatory input can frequency-entrain and phase-lock endogenous oscillations. Relatively weak electric field strengths on the order of 1 V/m are able to produce these effects, indicating that field effects are strongly amplified in the network. The effects of time-varying external stimulation are well predicted by the mean-field model, further underpinning the utility of low-dimensional neural mass models.
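The phase-locking behaviour described above can be illustrated with the Adler equation, the generic minimal model of an oscillator under weak periodic forcing (a didactic stand-in chosen for brevity, not the AdEx mean-field model itself): the phase difference phi between the endogenous oscillation and the drive obeys dphi/dt = delta_omega - eps*sin(phi), and locks to a fixed point when the detuning satisfies |delta_omega| <= eps.

```python
import numpy as np

def adler_phase(delta_omega: float, eps: float, T: float = 200.0,
                dt: float = 0.01, phi0: float = 0.1) -> float:
    """Euler-integrate dphi/dt = delta_omega - eps*sin(phi) and
    return the final phase difference."""
    phi = phi0
    for _ in range(int(T / dt)):
        phi += dt * (delta_omega - eps * np.sin(phi))
    return phi

# Detuning below the forcing strength: the phase difference locks
# to the fixed point where sin(phi*) = delta_omega / eps.
locked = adler_phase(0.5, 1.0)
# Detuning above the forcing strength: the phase drifts without bound.
drifting = adler_phase(2.0, 1.0)
```

The same qualitative picture (entrainment inside a locking range, drift outside it) is what the oscillatory stimulation produces in the mean-field model.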

Fig. 1

Bifurcation diagrams depict the state space of the E-I system in terms of the mean external inputs. a-b Mean-field model and AdEx network without adaptation with up and down-states, a bistable region bi (green dashed contour) and an oscillatory region LCEI (white solid contour). c-d With somatic adaptation. The bistable region is replaced by a slow oscillatory region LCaE

P149 Towards large-scale model of sleeping brain: sleep spindles in network mass models

Nikola Jajcay, Klaus Obermayer

Technische Universitat Berlin, Neural Information Processing Group, Berlin, Germany

Correspondence: Nikola Jajcay (jajcay@ni.tu-berlin.de)

BMC Neuroscience 2020, 21(Suppl 1):P149

The hierarchical nesting of sleep spindles and cortical slow oscillations is considered a precursor of successful episodic memory consolidation, where it presumably sets the stage for the migration of memory traces from short-term hippocampal storage to longer-lasting neocortical sites [1]. Spindles are thought to be generated in the thalamus and projected onto cortical sites, while slow oscillations originate in the cortex and migrate in wave-like patterns to the thalamus; hence the interplay between the two rhythms is orchestrated within the thalamocortical circuitry. The state-of-the-art mass models of the thalamocortical loop, however, only consider relevant motifs and patterns in sleep but discard network effects. Here we take the first steps towards integrating thalamocortical projections into a large-scale brain model and modelling whole-brain cortical slow-wave activity and thalamic spindles as seen in non-REM sleep.

We model the thalamus as a network node containing one excitatory and one inhibitory mass representing thalamocortical relay neurons and thalamic reticular nuclei, respectively. With minor deviations, our model follows the thalamic component developed in [2]. In the thalamic submodule, we first investigated its spindling behaviour upon changing the conductances of the rectifying and T-type calcium currents, by which the thalamus can be parametrised into three oscillatory regimes: fast oscillations dominated by the Ca current; a spindle regime with a balanced interplay of the Ca and rectifying currents; and slow delta oscillations for strong hyperpolarisation. Next, by applying an external excitatory firing rate drive, which simulates an excitatory source connected to the thalamus, we found dynamically interesting spindle-promoting regimes in interaction with the thalamic conductances (see Fig. 1 for the estimated number of spindles). By changing the parameters of the external drive, we were also able to control the inter-spindle interval, spindle duration, and shape of the spindle envelope.

Fig. 1

Various oscillatory regimes for isolated thalamus. Each panel shows estimated number of spindles (color-coded) in 60 seconds-long simulation dependent on excitatory rate drive to thalamocortical relay mass (TCR) and thalamic reticular nuclei (TRN). Different panels encode different conductances for potassium leak current (gLK) and rectifying current (gh)

Secondly, we connected the thalamic node to one cortical node, modelled as interconnected excitatory and inhibitory adaptive exponential integrate-and-fire neuronal masses [3]. The excitatory mass contains a spike-triggered adaptation mechanism by which the node is parametrised to sit in a limit cycle, generating slow oscillations by means of the excitation-adaptation feedback loop. We investigated the dynamical repertoire of the connected model with respect to the connection strength and network delays. Our preliminary results indeed show that, upon connection, the thalamic spindles are imprinted into the cortical node activity, where they modulate the slow oscillation envelope, and that the slow oscillation activity of the cortex, in turn, shapes the spindling behaviour of the thalamocortical relay mass by affecting spindle duration and inter-spindle interval. Our connected model also reproduces the phase-phase and phase-amplitude couplings reported in the literature on observed EEG data and other thalamocortical models.

Our results suggest that the thalamic mass model of spindle activity can be connected to various mass or mean-field models of cortical nodes and, after careful treatment of network connections and delays, we believe that our conclusions would carry over to a large-scale network model.

References

1. Ji D, Wilson MA. Coordinated memory replay in the visual cortex and hippocampus during sleep. Nature Neuroscience. 2007; 10(1): 100-7.

2. Schellenberger Costa M, et al. A thalamocortical neural mass model of the EEG during NREM sleep and its response to auditory stimulation. PLoS Computational Biology. 2016; 12(9): e1005022.

3. Cakan C, Obermayer K. Biophysically grounded mean-field models of neural populations under electrical stimulation. PLoS Computational Biology. 2020; 16(4): e1007822.

P150 The cortical ignition is related to local and mesoscale features of structural connectivity in the connectome of non-human organisms

Samy Castro, Patricio Orio

Universidad de Valparaiso, Centro Interdisciplinario de Neurociencia de Valparaíso, Valparaíso, Chile

Correspondence: Samy Castro (samy.castro@cinv.cl)

BMC Neuroscience 2020, 21(Suppl 1):P150

One way to study the fluctuations of cortical activity is ignition, i.e. the fast transition from low to high firing rate in cortical regions. The capability of cortical regions to flexibly sustain an ‘ignited’ state of activity has been related to conscious perception and hierarchical information processing. Also, we theoretically showed that the propensity of cortical regions to be ignited is tightly linked to the core-shell structure of the human connectome, i.e. the shells of most strongly within-connected subsets of regions [1]. Moreover, the weight of connections (in particular the inputs) has the greatest influence on the propensity of a region to get ignited. Now, using the connectomes of non-human organisms (macaque [2], mouse [3], rat [4] and fruit fly [5]), we assessed whether the relationship between ignition and both local and mesoscale structural organization is maintained in these organisms. The ignition capabilities of each connectome are obtained from a whole-brain mean-field model, using simulations of resting-state cortical activity. Then, the structural organization is analyzed using the s-core decomposition at the mesoscale level, and the degree, betweenness centrality and participation index at the local level. The order in which cortical regions are ignited is correlated to both the s-core and the strength of the regions (i.e. hubs) of the different organisms (Fig. 1). Moreover, we found that ignition recruitment is primarily related to the weights of the inputs, rather than the outputs, of each region. The local level better explains a region’s propensity to get ignited in the case of the macaque and rat, whereas the mesoscale fits better in the case of the fruit fly and mouse. We suggest that the weighted organization of non-human connectomes, as in the human, operates as a structural principle of ignition rooted in evolution.
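The local-level part of this analysis (correlating node strength with ignition order) can be sketched as follows. Everything here is fabricated for illustration, including a hypothetical ignition order constructed to mirror the reported relationship; it is not the authors' pipeline or data:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 50                                               # toy number of regions
W = rng.random((n, n)) * (rng.random((n, n)) < 0.2)  # toy weighted connectome
np.fill_diagonal(W, 0)

in_strength = W.sum(axis=0)    # sum of input weights per region (column sums)
out_strength = W.sum(axis=1)   # sum of output weights per region (row sums)

# Hypothetical ignition order, built so that high in-strength regions ignite
# first (0 = first to ignite); only to show the shape of the rank analysis.
ignition_order = np.argsort(np.argsort(-in_strength))

rho, p = spearmanr(in_strength, ignition_order)   # strongly negative here
```

In the actual study the ignition order comes from whole-brain mean-field simulations rather than being constructed from the strengths, and the same rank correlation is also computed against s-core membership at the mesoscale level.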

Fig. 1

Ignition recruitment is related to input weights in the local and mesoscale level. A-F The scatter plots of the coupling gain of ignition (x-axis) and the mesoscale level, A smax, B out-smax, and C in-smax (y-axis), and local level, D strength, E out-strength, and F in-strength of each region of the cocomac (blue), rat (orange), mouse (green) and drosophila (red) dataset

Acknowledgements: This work was supported by Fondecyt Grant 1181076 (PO) and the Advanced Center for Electrical and Electronic Engineering (FB0008 CONICYT, Chile) (PO, SC). The Centro Interdisciplinario de Neurociencia de Valparaíso (CINV) is a Millennium Institute supported by the Millennium Scientific Initiative of the Ministerio de Economía (Chile). SC was funded by Beca Doctorado Nacional CONICYT 21140603 and by Programa de Doctorado en Ciencias, mención Neurociencia, Universidad de Valparaíso.

References

1. Castro S, El-Deredy W, Battaglia D, Orio P. Cortical ignition dynamics is tightly linked to the core organisation of the human connectome. bioRxiv. 2020.

2. Bakker R, Wachtler T, Diesmann M. CoCoMac 2.0 and the future of tract-tracing databases. Frontiers in Neuroinformatics. 2012; 6: 30.

3. Rubinov M, Ypma R, Watson C, Bullmore E. Wiring Cost and Topological Participation of the Mouse Brain Connectome. PNAS. 2015; 112(32): 10032-10037.

4. Bota M, Sporns O, Swanson L. Architecture of the Cerebral Cortical Association Connectome Underlying Cognition. PNAS. 2015; 112(16): E2093-2101.

5. Shih CT, et al. Connectomics-based analysis of information flow in the Drosophila brain. Current Biology. 2015; 25(10): 1249-58.

P151 Generalisation of stimulus representation across somatosensory cortex areas in a cellular-resolution photostimulus detection task

Thijs L van der Plas, James M Rowland, Robert M Lees, Adam M Packer

University of Oxford, Department of Physiology, Anatomy, and Genetics, Oxford, United Kingdom

Correspondence: Thijs L van der Plas (thijs.vanderplas@dtc.ox.ac.uk)

BMC Neuroscience 2020, 21(Suppl 1):P151

Mice use whiskers to explore their environment. Whisker stimulation elicits a neural response in primary (S1) and secondary (S2) somatosensory cortex, two highly interconnected and hierarchically organised brain regions. Their interaction has been related to stimulus detection [1], although its precise functional role remains unclear [2]. Here, we aim to assign a function to this circuit for a stimulus detection task, by assessing how S1-S2 interactions facilitate stimulus perception.

We have conditioned mice to detect 2-photon optogenetic stimulation of random ensembles of S1 cells. This allows us to control the number of stimulated cells on a trial-by-trial basis, and to separate the initial stimulus representation from the ensuing network response. Simultaneously, we record the calcium activity of both stimulated and unstimulated cells in S1 and S2, providing an all-optical approach to studying neural dynamics [3]. In short, we are able to directly stimulate S1 neurons, hence defining the initial stimulus in S1, while recording the subsequent S1 and S2 neural response.

Mice were conditioned to report the photostimulus by licking a water spout. The task was divided into Go trials, where a varying number (5-150) of cells were stimulated, and Catch trials without stimulation. Behavioural accuracy increased as more cells were stimulated (Fig. 1A), indicating that our task operated in the regime of perception.

We observed strongly elevated, sustained neural population activity in both S1 and S2 on successful Go trials (Hits), compared to both unsuccessful Go trials (Misses) and to licking behaviour in the absence of a stimulus (False Positives). This suggests that S1 and S2 encode information during Hit trials that is different from both passive stimulus-induced activity and neural signals driven by movement and reward.

To confirm whether neurons indeed encoded stimulus information, we performed a stimulus decoding analysis on S1 and S2 neural activity separately. We only consider trials where mice licked (i.e. Hits and False Positives), to avoid a behavioural bias. Here, we observe a significant difference between S1 (where the stimulus occurred) and S2 (Fig. 1B): stimulus information could be decoded from S1 directly post-stimulus, but from S2 only after a considerable delay. Hence, stimulus information has propagated (directly or indirectly) from S1 to S2.
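Per-timepoint population decoding of this kind can be sketched with scikit-learn on synthetic data (trial counts, neuron counts, and the late injection of stimulus information are all fabricated to mimic the qualitative S2-like delay; this is not the study's data or exact pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_neurons, n_time = 80, 30, 20
y = rng.integers(0, 2, n_trials)             # e.g. Hit (1) vs False Positive (0)
X = rng.normal(size=(n_trials, n_neurons, n_time))
X[y == 1, :, 10:] += 1.0                     # stimulus information appears late

# decode the trial label from population activity, one time point at a time
acc = [cross_val_score(LogisticRegression(max_iter=1000),
                       X[:, :, t], y, cv=5).mean()
       for t in range(n_time)]
# acc stays near chance early and rises once stimulus information is present
```

Plotting `acc` against time reproduces the qualitative signature in Fig. 1B: chance-level decoding before information arrives, then a sustained rise.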

Furthermore, we find a striking dynamic property of information coding in the S1-S2 circuit. Directly post-stimulus (1 s), decoding accuracy in S1 depends on the stimulus strength, i.e. the number of stimulated cells (Fig. 1C). However, after a delay of 3 s, we find that accuracy has increased and has become independent of the original stimulus strength (Fig. 1C). S2 decoding accuracy increased equivalently (not shown), even though S2 decoding performs at chance level directly post-stimulus (Fig. 1B).

The stimulus detection task design requires the animals to produce the same response independent of stimulus strength. Our results show that the S1-S2 circuit dynamically performs this computation: by propagating stimulus information between S1 and S2, the neural code becomes independent of the original stimulus strength. Hence, we uncover a putative mechanism of how interregional communication can transform stimulus information to facilitate stimulus detection.

Fig. 1

A Behavioural accuracy (fraction of correct trials) is plotted against the number of photostimulated (PS) neurons Nps (P-values, Wilcoxon). B Decoding accuracy was calculated per time point with logistic regression. Shaded areas indicate std. across 6 mice. Red box indicates the 2-photon stimulus. C Decoded probability of PS in S1 for 2 time points (indicated by triangles in B), split by Nps

References

1. Kwon SE, Yang H, Minamisawa G, O’Connor DH. Sensory and decision-related activity propagate in a cortical feedback loop during touch perception. Nature Neuroscience. 2016; 19(9): 1243-9.

2. Ni J, Chen JL. Long-range cortical dynamics: a perspective from the mouse sensorimotor whisker system. European Journal of Neuroscience. 2017; 46(8): 2315-24.

3. Packer AM, Russell LE, Dalgleish HW, Häusser M. Simultaneous all-optical manipulation and recording of neural circuit activity with cellular resolution in vivo. Nature Methods. 2015; 12(2): 140-6.

P152 Face-selective neurons arise spontaneously in untrained deep neural networks

Min Song1, Seungdae Baek1, Jaeson Jang1, Gwangsu Kim2, Se-Bum Paik1

1Korea Advanced Institute of Science and Technology, Department of Bio and Brain Engineering, Daejeon, South Korea; 2Korea Advanced Institute of Science and Techn