Meeting abstracts
Open Access
27th Annual Computational Neuroscience Meeting (CNS*2018): Part Two
© The Author(s) 2018
Published: 29 October 2018
The Correction to this article has been published in BMC Neuroscience 2019 20:4
Gennady Cymbalyuk1, Christian Erxleben2, Angela Wenning-Erxleben2, Ronald Calabrese2
1Georgia State University, Neuroscience Institute, Atlanta, GA, United States; 2Emory University, Department of Biology, Atlanta, GA, United States
Correspondence: Gennady Cymbalyuk (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P200
Supported by NINDS 1 R01 NS085006 to RLC.
Krishnan GP, et al. Electrogenic properties of the Na(+)/K(+) ATPase control transitions between normal and pathological brain states. J Neurophysiol, 2015. 113(9): p. 3356–74.
Picton LD, et al. Sodium Pumps Mediate Activity-Dependent Changes in Mammalian Motor Networks. J Neurosci, 2017. 37(4): p. 906–921.
Picton LD, Zhang H, Sillar KT. Sodium pump regulation of locomotor control circuits. J Neurophysiol, 2017. 118(2): p. 1070–1081.
Kueh D, et al. Na(+)/K(+) pump interacts with the h-current to control bursting activity in central pattern generator neurons of leeches. Elife, 2016. 5.
Zhang HY, Sillar KT. Short-term memory of motor network performance via activity-dependent potentiation of Na+/K+ pump function. Current Biology 2012, 22(6): p. 526–31.
Tobin AE, Calabrese RL. Myomodulin increases Ih and inhibits the NA/K pump to modulate bursting in leech heart interneurons. Journal of Neurophysiology, 2005. 94(6): p. 3938–50.
P201 Gender differences in intrinsic oscillations of the resting brain following brief mindfulness intervention
Yi-Yuan Tang1, Rongxiang Tang2
1Texas Tech University, Lubbock, TX, United States; 2Washington University in St. Louis, Psychological and Brain Sciences, St. Louis, MO, United States
Correspondence: Yi-Yuan Tang (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P201
Gender differences have been shown in various cognitive domains, brain functions, and pathological populations. However, the role of gender in the response to brief mindfulness intervention remains largely unexplored. We applied fractional amplitude of low-frequency fluctuation (fALFF) to examine gender differences in intrinsic oscillations of the resting brain before and after mindfulness intervention. fALFF has been widely used to examine brain differences and abnormalities in healthy and patient populations; it measures the power spectrum intensity of spontaneous brain frequency oscillations and can identify gender-related differences in resting-state brain activity following intervention. We trained 38 college students (21 males) for 1 month (30 min per session for 20 sessions, 10 h in total). The mindfulness intervention was IBMT, which has been used in our series of randomized controlled trials [1–3]. All resting-state fMRI scans were collected at pre- and post-intervention in a 3-Tesla Siemens Skyra. Following the procedures of previous literature [2, 4], the time series of each voxel was transformed to the frequency domain after the linear trend was removed, without band-pass filtering. The square root was then calculated at each frequency of the power spectrum, and finally the sum of amplitude across 0.01–0.08 Hz was divided by that across the entire frequency range to obtain fALFF. The fALFF maps before and after the intervention were compared using paired t tests. All results were corrected for multiple comparisons (corrected p < 0.05), based on Monte Carlo simulation. Before the intervention there were no significant differences in resting-brain fALFF or behavior (e.g., mood states) between males and females. After the intervention, we did not detect any significant behavioral difference.
However, males and females showed different resting-state activity; specifically, males showed higher activity than females mainly in sensorimotor areas, cingulate cortex, and insula. These results are consistent with previous findings of gender differences in sleep states, cognitive performance, and social functioning [5, 6], and indicate that males and females respond differently to mindfulness intervention in resting-state brain activity. Gender differences should therefore be taken into consideration in future intervention studies.
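As a concrete illustration, the fALFF computation described above (detrend, transform to the frequency domain, take the square root of the power spectrum, and form the band-limited amplitude ratio) can be sketched as follows. This is a minimal sketch of the published procedure, not the authors' pipeline; names and the sampling rate are illustrative.

```python
import numpy as np

def falff(ts, fs, band=(0.01, 0.08)):
    """Fractional ALFF of one voxel time series.

    ts: voxel time series; fs: sampling rate in Hz (1/TR for fMRI).
    """
    n = len(ts)
    # remove the linear trend (no band-pass filtering)
    x = np.arange(n)
    ts = ts - np.polyval(np.polyfit(x, ts, 1), x)
    # square root of the power spectrum = amplitude at each frequency
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amplitude = np.sqrt(np.abs(np.fft.rfft(ts)) ** 2 / n)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    # fALFF: band-limited amplitude sum over full-spectrum amplitude sum
    return amplitude[in_band].sum() / amplitude.sum()
```

By construction the result lies in [0, 1]: a signal dominated by low-frequency fluctuations scores near 1, a high-frequency signal near 0.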
Tang YY, Holzel BK, Posner MI. The neuroscience of mindfulness meditation. Nat Rev Neurosci 2015, 16, 213–225
Tang YY, Tang R, Posner MI. Brief meditation training induces smoking reduction. Proc Natl Acad Sci USA 2013, 110, 13971–13975
Tang YY, et al. Central and autonomic nervous system interaction is altered by short term meditation. Proc Natl Acad Sci USA 2009, 106, 8865–70
Zou QH, et al. An improved approach to detection of amplitude of low-frequency fluctuation (ALFF) for resting-state fMRI: Fractional ALFF. J Neurosci Methods 2008, 172, 137–141
Dai XJ, et al. Gender differences in brain regional homogeneity of healthy subjects after normal sleep and after sleep deprivation: a resting-state fMRI study. Sleep Med 2012, 13, 720
Zhang C, Dougherty CC, Baum SA, et al. Functional connectivity predicts gender: Evidence for gender differences in resting brain connectivity. Hum Brain Mapp 2018, https://doi.org/10.1002/hbm.23950. [Epub ahead of print]
Epaminondas Rosa, Rosangela Follmann
Illinois State University, School of Information Technology, Normal, IL, United States
Correspondence: Epaminondas Rosa (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P202
Synchronization in neurological systems is critical for the survival of many species. Vital functions such as locomotion and mastication, for example, depend upon mechanisms yielding robust and stable neuronal synchronization. Moreover, pathologies including Parkinson’s disease and sleep disorders are associated with neuronal synchrony deficiencies rendering patients incapable of leading a normal life. In this study we describe a transition recently observed in computer simulations of gap-junction coupled neurons. The transition is mediated by a period-doubling cascade followed by chaos in synchronous neurons, with their firing regimes evolving from tonic (fast repetitive spiking) to bursting (periods of repetitive fast spiking followed by periods of quiescence) as a coupling parameter is increased. While tonic-to-bursting transitions play important roles, for instance, in thalamocortical neurons at sleeping transition states (Sherman, Trends Neurosci, 2001), and in sensory-motor nuclei that generate the typical tremors in Parkinson’s disease (Llinas and Steriade, J Neurophysiol, 2006), little is known about the mechanisms regulating and controlling these transitions at the level of the dynamics of the individual networked neurons. We use a Hodgkin-Huxley type model neuron (Rosa et al., Biosystems, 2015) to investigate the transition between tonic and bursting neuronal behaviors in small networks of electrically coupled neurons. Numerical simulations show that two distinct neurons, one tonic and the other bursting, reciprocally coupled via gap junctions, may synchronize either in the tonic or in the bursting regime, depending upon the individual characteristics of the two neurons, and remain in the state in which they first synchronized over extended increments in the strength of their coupling.
However, we also found that in some cases the two neurons synchronize initially in the tonic regime and, with increased coupling strength, undergo a period-doubling bifurcation cascade en route to chaos, pass through chaos, and then, still in synchrony, enter the bursting regime. Intriguingly, we noticed that some peculiar common features of the independent single neurons are preserved when they are coupled and synchronized. For example, the characteristic firing rate at the border between the tonic and bursting regimes of an individual neuron is passed on to the collective when pairs of distinct neurons synchronize (Shaffer et al., PRE, 2016). Similar results were obtained for triads of gap-junction-coupled neurons (Shaffer et al., Eur Phys J ST, 2017).
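For readers who want to experiment with the coupling scheme, here is a minimal sketch of two neurons coupled by a gap junction, where the electrical coupling current into neuron i is g·(x_j − x_i). It uses the Hindmarsh-Rose model with forward-Euler integration rather than the authors' Hodgkin-Huxley-type model, and all parameter values are illustrative.

```python
import numpy as np

def hr_pair(g_gap, I=3.0, dt=0.01, steps=200_000, seed=1):
    """Two identical Hindmarsh-Rose neurons with gap-junction coupling.

    Returns the (steps, 2) array of membrane variables x for both cells.
    The gap-junction current into each neuron is g_gap * (x_other - x_self).
    """
    rng = np.random.default_rng(seed)
    # distinct initial conditions so synchrony is not trivial
    x = rng.uniform(-1.5, 0.5, 2)
    y = rng.uniform(-8.0, 0.0, 2)
    z = rng.uniform(2.0, 3.5, 2)
    trace = np.empty((steps, 2))
    for n in range(steps):
        I_gap = g_gap * (x[::-1] - x)          # x_j - x_i for each neuron
        dx = y + 3 * x**2 - x**3 - z + I + I_gap
        dy = 1 - 5 * x**2 - y
        dz = 0.006 * (4 * (x + 1.6) - z)       # slow adaptation variable
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        trace[n] = x
    return trace
```

Sweeping `g_gap` from 0 upward and inspecting the interspike intervals of the synchronized pair is one way to reproduce the qualitative tonic-to-bursting picture described above.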
Maximilian Schmidt1, Rembrandt Bakker2, Kelly Shen3, Gleb Bezgin4, Claus Hilgetag5, Markus Diesmann6, Sacha van Albada7
1RIKEN Brain Science Institute, Wako-shi, Germany; 2Radboud University, Donders Institute for Brain, Cognition and Behavior, Nijmegen, Netherlands; 3Baycrest, Rotman Research Institute, Toronto, Canada; 4McGill University, McConnell Brain Imaging Centre, Montreal, Canada; 5University Medical Center Eppendorf, Department of Computational Neuroscience, Hamburg, Germany; 6Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6), Juelich, Germany; 7Jülich Research Centre, Institute for Advanced Simulation (IAS-6), Juelich, Germany
Correspondence: Sacha van Albada (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P203
Cortical resting-state dynamics is organized on multiple spatiotemporal scales and involves cell-type-specific spike rates, slow and fast fluctuations, clustered inter-area correlations, and inter-area activity propagation. Simulations of large parts of cortex resolving the individual neurons and synapses enable studying how cortical network structure shapes this multi-scale activity, but have been limited by the available computational resources and simulation technology. Developments in the simulation technology of NEST and access to the JUQUEEN supercomputer have enabled us to overcome this barrier and simulate a network of 32 vision-related areas of macaque cortex, with each area represented by a 1 mm2 microcircuit with the full density of neurons and synapses [1], which avoids distortions due to downscaling [2]. The simulations rely on a recently derived connectivity map for the visual areas of macaque cortex that predicts the connection probability between any two neurons based on their types, areas, and layers [3]. This connectivity map integrates axonal tracing data with predictions from cortical architecture (neuron densities, layer thicknesses), inter-area distances, and neuronal morphologies. In line with models using simplified equations for the individual areas [4], our model predicts that cortex operates in a metastable state where slow activity fluctuations appear. In this regime, the power spectrum of simulated V1 spiking activity and the distribution of spike rates across V1 neurons agree well with those from parallel spike recordings in lightly anesthetized macaque [5]. Furthermore, the inter-area functional connectivity is similar to that from macaque resting-state fMRI [6]. The simulated neuronal activity propagates across areas mainly in the feedback direction, akin to LFP findings during sleep [7].
A mean-field-based analysis [8] shows that the order of activations of the areas is strongly associated with local stability properties, such that the most unstable areas are activated first. Our model reconciles microscopic and macroscopic accounts of cortical neural networks and provides a platform for further developments.
Supported by the European Union Seventh Framework Programme under Grant Agreement No. 604102 (Human Brain Project, HBP), the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 720270 (HBP SGA1), the German Research Council (DFG Grants SFB936/A1, Z1, TRR169/A2, and SPP 2041), and Grant JINB33 for computing time on the JUQUEEN supercomputer.
Schmidt M, Bakker R, Shen K, et al. Full-density multi-scale account of structure and dynamics of macaque visual cortex. arXiv 2015, preprint arXiv:1511.09364.
Van Albada SJ, Helias M, Diesmann M. Scalability of asynchronous networks is limited by one-to-one mapping between effective connectivity and correlations. PLoS computational biology 2015, 11(9), e1004490.
Schmidt M, Bakker R, Hilgetag CC, et al. Multi-scale account of the network structure of macaque visual cortex. Brain Structure and Function 2018, 223(3), 1409–1435.
Cabral J, Kringelbach ML, Deco G. Exploring the network dynamics underlying brain activity during rest. Progress in neurobiology 2014, 114, 102–131.
Chu CC, Chien PF, Hung CP. Tuning dissimilarity explains short distance decline of spontaneous spike correlation in macaque V1. Vision research 2014, 96, 113–132.
Everling S, Babapoor-Farrokhran S, Hutchison RM, Gati JS, Menon RS. Functional connectivity patterns of medial and lateral macaque frontal eye fields reveal distinct visuomotor networks. J Neurophysiol 2013, 109, 2560–2570.
Nir Y, Staba RJ, Andrillon T, et al. Regional slow waves and spindles in human sleep. Neuron 2011, 70(1), 153–169.
Schuecker J, Schmidt M, van Albada SJ, et al. Fundamental activity constraints lead to specific interpretations of the connectome. PLoS computational biology 2017, 13(2), e1005179.
P204 In the footsteps of learning: Changes in network dynamics and dimensionality with task acquisition
Merav Stern1, Shawn Olsen2, Eric Shea-Brown1, Yulia Oganian3, Sahar Manavi2
1University of Washington, Department of Applied Mathematics, Seattle, WA, United States; 2Allen Institute for Brain Science, Modelling, Analysis and Theory, Seattle, WA, United States; 3University of California, San Francisco, School of Medicine, San Francisco, CA, United States
Correspondence: Merav Stern (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P204
When we learn a new task, changes in our neural activity take place in order to accumulate and act upon relevant information. These changes can appear with different magnitudes in multiple brain areas. To understand the dynamics, and ultimately the mechanisms, of these changes, we follow mice as they learn to perform a visual change detection task and use wide-field GCaMP imaging to record their neural activity across the dorsal surface of the cortex. We also study random neural network models with a high-level area structure resembling cortex; by iteratively training these networks to perform the task, we assess the similarities and differences between the mouse cortex and artificial recurrent networks. We find that initially, during the naïve behavioral stage, the visual cortex alone responds to the changing stimuli. As learning progresses, frontal areas respond as well, and eventually, at the expert level, the whole mouse cortex responds to task-relevant stimuli. Cortical activity becomes correlated across all areas, and responses in general become more stereotyped, with precise temporal dynamics. Moreover, the dimensionality of this activity decreases as training progresses. Our artificial neural networks show similar learning-related phenomena. Altogether, we identify three cortex-wide phenomena that emerge during learning of a basic sequential task: task-specific engagement of surprisingly widespread areas across cortex, an increase in the temporal precision and stereotypy of cortical activity, and a reduction of its dimensionality. These phenomena occur both in mouse cortex and in trained, minimally structured artificial neural networks, suggesting that they may recur across many learning systems and posing intriguing questions for further theoretical work.
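The reduction in dimensionality can be quantified in several ways; one common measure is the participation ratio of the covariance eigenspectrum. The abstract does not specify which measure was used, so the following is an illustrative sketch.

```python
import numpy as np

def participation_ratio(activity):
    """Effective dimensionality of population activity.

    activity: (timepoints, units) array. With eigenvalues lambda_i of the
    covariance matrix, PR = (sum_i lambda_i)^2 / sum_i lambda_i^2. PR equals
    the number of units when variance is spread evenly across dimensions and
    approaches 1 when a single dimension dominates.
    """
    centered = activity - activity.mean(axis=0)
    cov = centered.T @ centered / (len(activity) - 1)
    eig = np.linalg.eigvalsh(cov)
    eig = np.clip(eig, 0.0, None)   # guard tiny negative eigenvalues
    return eig.sum() ** 2 / (eig ** 2).sum()
```

Tracking this quantity session by session gives a single scalar trace of how population activity contracts onto fewer dimensions as training progresses.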
P205 Implementation of CA1 microcircuits model in NetPyNE and exploration of the effect of neuronal/synaptic loss on memory recall
Ángeles Tepper1, Adam Sugi2, William W Lytton3, Salvador Dura-Bernal3
1Pontifical Catholic University of Chile, Santiago, Chile; 2Universidade Federal do Paraná, Curitiba, Brazil; 3SUNY Downstate Medical Center, Department of Physiology and Pharmacology, Brooklyn, NY, United States
Correspondence: Ángeles Tepper (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P205
The hippocampus has a major role in learning and memory, spatial navigation, emotional behavior, and regulation of hypothalamic functions [1]. Many models of its circuitry have been developed in order to further understand its functions [2]. The CA1 microcircuitry has been proposed to be responsible for heteroassociative declarative memories [3], and the cycles of storage and recall are thought to be modulated by theta oscillations [4]. Cutsuridis et al. [5] modeled the CA1 microcircuitry using NEURON, the leading simulator in the neural multiscale modeling domain. The purpose was to investigate the biophysical mechanisms by which storage and recall of spatio-temporal input patterns are achieved, employing a detailed biophysical representation of the CA1 microcircuitry. The model included five cell types whose functional roles were evaluated in the simulations. Each neuron had a specific morphology, ionic and synaptic properties, connectivity, and spatial distribution that closely followed experimental evidence. The original model was implemented in NEURON using HOC. The deprecated HOC language and the lack of standardization in NEURON make the model hard to understand, reproduce, and manipulate, and hard to run in parallel simulations. Such a complex, data-driven, biologically realistic network would benefit from a separation of model parameters and implementation. To address these issues, we re-implemented the model using NetPyNE (www.netpyne.org), a high-level Python interface to the NEURON simulator, which facilitates the development, parallel simulation, and analysis of biological neuronal networks [6]. NetPyNE employs a standardized declarative format to describe model specifications and can then generate an efficiently parallelized NEURON model. It also provides a large number of analysis functions that enable further exploration of the model, and allows export to NeuroML, a standard format for computational models.
Our NetPyNE implementation is able to reproduce the results of the original model, but using a clean and powerful declarative language, which makes this complex model accessible to a wider community of neuroscientists. Furthermore, we analyse and explore the model in new ways, including connectivity analysis, computation of LFP spectra and information flow. We also perform novel manipulations to elucidate the relation between neuronal and synaptic loss, involved in Alzheimer’s disease, and memory recall performance.
Anand KS, Dhikav V. Hippocampus in health and disease: An overview. Annals of Indian Academy of Neurology 2012; 15(4), 239
Bezaire MJ, Raikov I, Burk K, Vyas D, Soltesz I. Interneuronal mechanisms of hippocampal theta oscillations in a full-scale model of the rodent CA1 circuit. Elife 2016; 5. https://doi.org/10.7554/elife.18566
Treves A, Rolls ET. Computational analysis of the role of the hippocampus in memory. Hippocampus 1994; 4: 374–391
Hasselmo ME, Bodelón C, Wyble BP. A proposed function for hippocampal theta rhythm: separate phases of encoding and retrieval enhance reversal of prior learning. Neural Comput 2002; 14: 793–817
Cutsuridis V, Cobb S, Graham BP. Encoding and retrieval in a model of the hippocampal CA1 microcircuit. Hippocampus 2010; 20: 423–446
Lytton WW, Seidenstein A, Dura-Bernal S, Schurmann F, McDougal RA, Hines ML. Simulation neurotechnologies for advancing brain research: Parallelizing large networks in NEURON. Neural Comput 2016
P206 Modular science: Towards online multi application coordination on inhomogeneous high performance computing and neuromorphic hardware systems
Abigail Morrison, Alexander Peyser, Wouter Klijn, Sandra Diaz-Pier
Jülich Research Centre, Institute for Advanced Simulation (IAS-6), Juelich, Germany
Correspondence: Alexander Peyser (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P206
Alexander Peyser et al. NEST 2.14.0, 2017. https://juser.fz-juelich.de/record/838729
Wouter Klijn et al. Arbor: neural network simulator for HPC, 2016. https://eth-cscs.github.io/nestmc/
Alper Yegenoglu et al. Elephant – Open-Source Tool for the Analysis of Electrophysiological Data Sets. 2015. http://juser.fz-juelich.de/record/255984
H Lindén et al. LFPy: A tool for biophysical simulation of extracellular potentials generated by detailed model neurons, 2014. https://www.frontiersin.org/articles/10.3389/fninf.2013.00041/full
Sanz Leon et al. The virtual brain: a simulator of primate brain network dynamics. 2013. https://www.frontiersin.org/article/10.3389/fninf.2013.00010
P207 Characteristic region-specific neuronal plasticity by PrP peptide aggregates in rat organotypic hippocampal slice cultures
Sang Seong Kim
Hanyang University, Department of Pharmacy, Ansan, Republic of Korea
Correspondence: Sang Seong Kim (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P207
Scrapie prion protein (PrPSc), the abnormal conformational isoform of cellular prion protein (PrPC), is tightly associated with prion pathogenesis. Neuronal cell death in the brain is the major pathophysiological consequence of PrPSc aggregates. Growing evidence indicates that brain circuits are important for maintaining the physiological functions of the brain, and that impairment of a circuit can translate into malfunction of physiology and behavior, which defines clinical phenotypes. To investigate the impact of PrPSc aggregates on brain circuits, organotypically cultured brain sections of wild-type mice were challenged with the amyloidogenic peptide PrP(106–126), derived from PrPC, in either an aggregated or non-aggregated state. The changes occurring in the brain sections were monitored electrophysiologically. For the functional connectivity analysis, mutual information was evaluated for each pair of the 8 × 8 recording electrodes. For two CSDs X = (x_1, x_2, …, x_N) and Y = (y_1, y_2, …, y_N) at the two channels under analysis, mutual information (MI) measures the statistical dependence between X and Y [1]. MI is similar in spirit to cross correlation, but is much more general, because MI is capable of capturing nonlinear dependencies that cross correlation would miss. To estimate MI, we use the k-nearest-neighbor approach [2]. In our study, normalized mutual information, NMI(X, Y) = I(X, Y)/(√(I(X, X)) √(I(Y, Y))), is measured to scale the results between 0 and 1.
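A simple plug-in (histogram) estimate illustrates the MI and NMI computation. The study itself used the k-nearest-neighbor estimator [2]; the bin count below is an arbitrary choice, and I(X, X) reduces to the entropy of the binned signal, so NMI(X, X) = 1.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram ("plug-in") estimate of MI between two signals, in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of X
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])).sum())

def normalized_mi(x, y, bins=16):
    """NMI(X, Y) = I(X, Y) / sqrt(I(X, X) * I(Y, Y)), scaled to [0, 1]."""
    denom = np.sqrt(mutual_information(x, x, bins) *
                    mutual_information(y, y, bins))
    return mutual_information(x, y, bins) / denom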
Cover TM, Thomas JA. Elements of Information Theory. UK: John Wiley & Sons, 2012
Kraskov, A, Stögbauer H, Grassberger P. Estimating mutual information. Physical review E 2004, 69(6), 066138.
Hyeonsu Lee1, Woochul Choi1,2, Youngjin Park1, Se-Bum Paik1,2
1Korea Advanced Institute of Science and Technology, Department of Bio and Brain Engineering, Daejeon, Republic of Korea; 2Korea Advanced Institute of Science and Technology, Program of Brain and Cognitive Engineering, Daejeon, Republic of Korea
Correspondence: Hyeonsu Lee (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P209
Processing of sequential information is crucial to encoding various inputs from the real world; thus, the working memory used to store sequential information in humans is of great interest. A number of studies have reported that subjects memorize the first and last items in a sequence better than the others, referred to as the primacy and recency effects [1, 2]. However, the underlying mechanisms of these effects are still elusive. Here, we propose a novel model of these features of sequential memory performance by introducing the concepts of sequential overwrite and non-uniform allocation of memory resources. First, for a precise investigation of sequential memory characteristics, we performed experiments in which subjects memorized a series of visual patterns. As previously reported, we confirmed that the correct ratio for the first and last stimuli in the sequence was higher than for the others. To explain this result, we modified the standard resource model with the assumptions that memory resources of previous information are partially replaced by newly introduced information (the overwrite effect), and that memory performance for each item is proportional to the amount of resources allocated. With this sequential overwrite model and a non-uniform resource allocation fitted to the data, we could readily explain the observed U-shaped memory performance in human psychophysical experiments. Next, based on our model in which sequential overwrite is a key factor of memory performance, we predicted that memory performance can be affected by modulating memory overwrite. To test this idea, we designed an experiment in which subjects performed sequential memory tasks under three conditions: correct information, no information, or wrong information about the number of items was presented before the task. We expected that these three conditions would vary the degree of memory overwrite of the sequential items and thereby affect performance.
As predicted, correct information improved memory performance while wrong information worsened it, and this effect was most significant at earlier positions in the sequence, where overwrite is stronger. Model parameters fitted to the observed results suggested that the degree of overwrite was significantly different across conditions and explained the performance well. Our model suggests that sequential overwrite and non-uniform allocation of memory resources can explain the origin of the characteristic U-shape of sequential memory performance. Furthermore, it suggests a possible mechanism of optimal memory allocation given prior information about the items to memorize.
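A minimal sketch of the sequential-overwrite idea follows. The allocation and overwrite parameters (`w`, `p`) and the power-law allocation form are illustrative choices, not the values or functional form fitted to the data; the point is that decaying initial allocation (primacy) combined with overwrite by later items (recency) yields a U-shaped resource profile.

```python
import numpy as np

def memory_resources(n_items, w=0.25, p=1.0):
    """Resources remaining per serial position under sequential overwrite.

    Item i starts with a non-uniform allocation (1 + i)**(-p), so early
    items claim more resources; each of the (n_items - 1 - i) later items
    then overwrites a fraction w of what item i still holds. Performance
    is taken to be proportional to the remaining resources.
    """
    i = np.arange(n_items, dtype=float)
    initial = (1.0 + i) ** (-p)                    # primacy: early advantage
    surviving = (1.0 - w) ** (n_items - 1 - i)     # recency: less overwrite
    r = initial * surviving
    return r / r.sum()                             # fixed total resource
```

With overwrite switched off (`w=0`) the profile decays monotonically (pure primacy); with overwrite on, the last items recover relative to the middle, producing the U-shape.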
Hurlstone MJ, Graham JH, Baddeley AD. Memory for serial order across domains: An overview of the literature and directions for future research. Psychological Bulletin 2014, 339
Gorgoraptis N, Catalao RFG, Bays PM et al. Dynamic updating of working memory resources for visual objects. Journal of Neuroscience 2011, 31,
Jaeson Jang1, Min Song1,2, Se-Bum Paik1,2
1Korea Advanced Institute of Science and Technology, Department of Bio and Brain Engineering, Daejeon, Republic of Korea; 2Korea Advanced Institute of Science and Technology, Program of Brain and Cognitive Engineering, Daejeon, Republic of Korea
Correspondence: Jaeson Jang (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P210
In higher mammals, the primary visual cortex (V1) is organized into various maps of visual functions such as ocular dominance, preferred orientation, and spatial frequency. It has recently been reported that the topographies of these functional maps are geometrically correlated, such that the contours of the orientation and spatial frequency maps intersect orthogonally. This may imply an efficient tiling of processing units, but it is still unclear how this systematic organization can develop in the cortex. Here, we introduce a developmental model to suggest that the topographies of the functional maps can be seeded altogether from the regularly structured retinal mosaic and that this shared origin results in topographical correlation among the maps. A previous model provides insight, showing that a quasi-periodic orientation map can be seeded by the moiré interference between hexagonal lattices of ON and OFF retinal ganglion cells (RGCs). The key assumption was that the orientation tuning of a V1 neuron can be predicted by the local alignment of ON and OFF RGCs. This is supported by experimental observations that the structure of cortical functional maps is strongly correlated with the local organization of ON and OFF afferents. Expanding this monocular model to the binocular condition, we suggest that the local organization of ON and OFF RGCs in the retinal mosaic can also constrain ocular dominance and spatial frequency preference. We found that the distance between ON and OFF RGCs could determine the separation of the ON and OFF receptive field subregions of the connected V1 neuron, and could also change the wiring strength to contralateral and ipsilateral feedforward circuits.
With the notion that ipsilateral connections develop later to match the orientation preference through the two pathways, our model showed that the phase difference between contra- and ipsilateral receptive fields of binocular V1 neurons could induce a preference for higher spatial frequency than in the monocular region. As a result, we successfully reconstructed the orthogonal relationships between the orientation, ocular dominance, and spatial frequency maps, as observed in experimental data [2, 6]. Our results suggest a unified developmental model of various functional maps in visual cortex.
Kremkow J, Jin J, Wang Y, Alonso JM. Principles underlying sensory map topography in primary visual cortex. Nature 2016, 533 (7601).
Hübener M, Shoham D, Grinvald A, Bonhoeffer TJ. Spatial relationships among three columnar systems in cat area 17. Journal of Neuroscience 1997, 17, 9270–9284
Issa NP, Trepel C, Stryker MPJ. Spatial frequency maps in cat visual cortex. Journal of Neuroscience 2000, 20, 8504–8514
Nauhaus I, Nielsen KJ, Callaway EM. Efficient Receptive Field Tiling in Primate V1. Neuron 2016, 91, 893–904
Nauhaus I, Nielsen KJ, Disney A, Callaway EM. Orthogonal micro-organization of orientation and spatial frequency in primate primary visual cortex. Nat. Neurosci. 2012, 15, 1683–1690
Paik SB, Ringach DL. Retinal origin of orientation maps in visual cortex. Nat. Neurosci. 2011, 14, 919–925
Youngjin Park1, Se-Bum Paik1,2
1Korea Advanced Institute of Science and Technology, Department of Bio and Brain Engineering, Daejeon, Republic of Korea; 2Korea Advanced Institute of Science and Technology, Program of Brain and Cognitive Engineering, Daejeon, Republic of Korea
Correspondence: Youngjin Park (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P211
The basolateral amygdala (BLA) is known to be a core brain region for emotional functions such as fear memory. Recently, it was reported that observation of a memory ensemble in the BLA revealed unusual neural activities, different from the predictions of the standard Hebbian model (Blair 2001) of synaptic plasticity. Grewe et al. examined neural ensembles for the conditioned stimulus (CS) and unconditioned stimulus (US) in the BLA during fear conditioning, and found that the dynamics of the individual neurons observed were contradictory to the global tendency. According to the data, the CS ensemble came to resemble the US ensemble more during learning, but the activity of individual neurons that simultaneously receive CS and US input tended to decrease. Moreover, only a small portion of the cells with potentiated CS responses were responsive to the US; thus these responses alone cannot explain the global changes in the CS and US ensembles. From this, Grewe et al. concluded that there must be hidden elements, such as a hypothetical neuromodulator, that produce the observed result. Here, we suggest an alternative solution: a hierarchical model with segregated learning and coding layers under standard Hebbian plasticity. Our key idea is that the neural populations for information coding and associative learning may be separate. In the previous model used to analyze the observed data, it was assumed that learning and coding occur simultaneously in the same neural layer, so the bi-directional change in the neural ensembles could not be explained. However, if the output coding layer receives projections from a separate upstream layer, the observed non-Hebbian behavior of the output ensemble might not be paradoxical. To test this idea, we constructed a two-layer feedforward network model for computer simulation.
We assumed that the CS and US ensemble patterns were first formed in the input layer, and that their activity patterns were then projected to neurons in the output layer to form the observed CS and US ensembles there. During conditioning, we implemented a stochastic change of neural activity in the input CS ensemble following the Hebbian rule: neurons that overlap the US ensemble increase their response and neurons that do not overlap it decrease their response, each with a constant probability. Under this condition, we could reproduce the experimental observation that the CS/US overlap ratio in the output layer increased. In addition, we also found bi-directional changes in activity within the output layer, similar to those observed in the BLA data. Our result suggests that the observed non-Hebbian ensemble dynamics could originate from the projection of purely Hebbian dynamics, raising an issue about the fundamental organization of memory consolidation circuits.
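The two-layer idea can be sketched as follows, with purely illustrative ensemble sizes and learning rates (this is not the authors' fitted model): a fixed random projection reads out input-layer ensembles, and stochastic Hebbian-like changes confined to the input layer increase the CS/US overlap measured among the most active output units.

```python
import numpy as np

def simulate(seed=0, n_in=400, n_out=200, k_in=120, k_out=40,
             steps=80, p=0.25, delta=0.1):
    """Return the output-layer CS/US ensemble overlap before and after
    Hebbian-like conditioning applied only to the input layer."""
    rng = np.random.default_rng(seed)
    cs = rng.choice(n_in, k_in, replace=False)     # input CS ensemble
    us = rng.choice(n_in, k_in, replace=False)     # input US ensemble
    W = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)  # fixed readout

    cs_act = np.zeros(n_in); cs_act[cs] = 1.0
    us_act = np.zeros(n_in); us_act[us] = 1.0

    def ensemble(v):
        # output ensemble = the k_out most strongly driven readout units
        return set(np.argsort(W @ v)[-k_out:])

    before = len(ensemble(cs_act) & ensemble(us_act))
    shared = np.intersect1d(cs, us)      # CS neurons also receiving US
    cs_only = np.setdiff1d(cs, us)
    for _ in range(steps):
        # Hebbian rule with a constant per-step probability p:
        # CS neurons overlapping the US potentiate, the rest depress
        up = shared[rng.random(shared.size) < p]
        down = cs_only[rng.random(cs_only.size) < p]
        cs_act[up] += delta
        cs_act[down] = np.maximum(cs_act[down] - delta, 0.0)
    after = len(ensemble(cs_act) & ensemble(us_act))
    return before, after
```

Even though every learning step is purely Hebbian and confined to the input layer, the overlap of the top-k output ensembles grows, mirroring the ensemble-level observation.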
P212 Data-driven models of interneurons in the somatosensory thalamus and comparison with gene expression data
Elisabetta Iavarone1, Jane Yi1, Ying Shi1, Christian O’Reilly1, Werner Alfons Hilda Van Geit1, Christian A Rössert1, Henry Markram1, Sean Hill2
1École Polytechnique Fédérale de Lausanne, Blue Brain Project, Lausanne, Switzerland; 2University of Toronto & EPFL, Centre for Addiction and Mental Health and Blue Brain Project, Toronto, Canada
Correspondence: Elisabetta Iavarone (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P212
The thalamic reticular nucleus is the major source of inhibition to the thalamus. However, different thalamic nuclei in the rodent brain receive varying degrees of inhibition from local interneurons, which range from 15–20% of the neuronal population in the visual thalamus to < 4% in the somatosensory thalamus. Despite the lower abundance of thalamic interneurons compared to excitatory thalamo-cortical (TC) cells, they have been shown to shape visual responses and to dynamically influence the extent of receptive fields. As the morphological and electrophysiological properties of TC cells in first-order thalamic nuclei show a high degree of similarity across modalities (e.g., visual and somatosensory systems), we hypothesized that local interneurons in different sensory circuits have similar cellular and synaptic properties, and explored them with the aid of data-driven computational models, in vitro patch-clamp recordings, and gene expression data. We characterized the properties of mouse TC neurons of the ventrobasal (VB) nucleus and of local interneurons by applying a standardized battery of electrical stimuli, biocytin staining and 3D morphological reconstruction. We qualitatively classified the passive responses and firing properties into different electrical types (e-types) and validated the classification by extracting electrical features from the voltage traces. We then used the 3D morphologies, electrical features, and ionic current kinetics and distributions from experimental findings to constrain multi-compartmental models of the different e-types using a multi-objective optimization strategy, and validated them with stimuli not used during model building. We complemented our data analysis and modelling pipeline by comparing the modelled e-types with single-cell and synaptic properties systematically curated from the neuroscientific literature, along with gene expression data.
The result of this analysis suggests that while some thalamic e-types are comparable to interneurons in cortical microcircuits, others are thalamus-specific and comparable to interneurons from the dorsal part of the lateral geniculate nucleus.
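The feature-based validation of the e-type classification can be illustrated with a minimal sketch (a crude stand-in for a full feature-extraction library; the detection threshold and the synthetic trace are assumptions):

```python
import numpy as np

def extract_efeatures(t, v, thresh=-20.0):
    """Minimal electrical-feature extraction from a voltage trace
    (t in ms, v in mV): detect spikes as upward threshold crossings,
    then derive features of the kind used to constrain the models."""
    above = v >= thresh
    crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1
    spike_times = t[crossings]
    feats = {"spike_count": len(spike_times)}
    if len(spike_times) > 1:
        isis = np.diff(spike_times)
        feats["mean_frequency"] = 1000.0 / isis.mean()   # Hz from ms ISIs
        feats["isi_cv"] = isis.std() / isis.mean()
    return feats

# Synthetic tonic-firing trace: one crude 1-ms "spike" every 100 ms
t = np.arange(0.0, 1000.0, 0.1)
v = np.full_like(t, -65.0)
for ts in np.arange(100.0, 1000.0, 100.0):
    v[np.abs(t - ts) < 0.5] = 20.0

feats = extract_efeatures(t, v)
```

Features such as these (spike count, mean frequency, ISI coefficient of variation) are the kind of quantities a multi-objective optimizer can match against experimental traces.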
Arcelli P, Frassoni C, Regondi MC, et al. GABAergic neurons in mammalian thalamus: a marker of thalamic complexity? Brain Research Bulletin 1997, 42, 27–37.
Heiberg T, Hagen E, Halnes G, Einevoll GT. Different Effects of Triadic and Axonal Inhibition on Visual Responses of Relay Cells. PLoS Computational Biology 2016, 12(5), e1004929.
Sherman SM, Guillery RW. (2006) Exploring the Thalamus and its Role in Cortical Function. Cambridge, MA: MIT Press.
Van Geit W, Gevaert M, Chindemi G, et al. BluePyOpt: Leveraging open source software and cloud infrastructure to optimise model parameters in neuroscience. Frontiers in Neuroinformatics 2016, 10.
O’Reilly C, Iavarone E, Hill SL. A Framework for Collaborative Curation of Neuroscientific Literature. Frontiers in Neuroinformatics 2017, 11, 27.
Leist M, Datunashvilli M, Kanyshkova T, et al. Two types of interneurons in the mouse lateral geniculate nucleus are characterized by different h-current density. Scientific Reports 2016, 6, 24904.
Svetlana Gladycheva, Bailey Conrad, Sean Powell
Towson University, Department of Physics, Towson, MD, United States
Correspondence: Svetlana Gladycheva (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P213
We investigate the role of synaptic connectivity in the cortical network, which may lead to a better understanding of autism spectrum disorder (ASD). We have established a measure of the integration of distinct stimuli in a cortical network model, studied its properties, and investigated the effects of network connectivity on this integration. Taken together with ongoing experimental optogenetic studies, this model may help pave the way toward potential pharmacological targets for the treatment of ASD.
Autism is a neurodevelopmental disorder for which there is no cure. It is characterized by impairments in social cognition and communication. Abnormalities in the ASD brain are not strictly localized, but involve multiple neural networks [2, 3]. Numerous studies suggest a deficit in multisensory integration (MSI) in autism, in both human and animal models [4, 5]. Specifically, it has been shown that ASD patients demonstrate a widened window of audio-visual temporal integration. It has been proposed that the deficits in the integration of multisensory cues leading to ASD are likely caused by dysfunctional connectivity in the brain [6–8].
Multisensory integration by neural populations in the cortex has recently been studied in many contexts [9–11]. Our model is an adaptation of the Traub model of a single-column thalamocortical network, modified for use in the GENESIS neuronal simulation environment. The model comprises 14 types of cortical neurons, each with its own compartmental morphology and electrophysiological properties, connected in a columnar structure. We apply pulse-train stimuli to the cells at two different locations in the column and measure the local field potential (LFP) to characterize network activity.
Our model demonstrates that multiple distinct stimuli generate a superadditive integrated LFP response, in which the combined stimulation of the two locations produces a larger response than the sum of the two individual ones. We then use this model to investigate the effect of network connectivity parameters on temporal aspects of multisensory integration in the cortex. It is believed that the ASD condition may be associated with widened temporal windows of cortical integration. Existing ASD therapies concentrate on behavioral interventions that reduce symptoms; to date, no drug therapy exists for ASD that would repair or strengthen brain circuits. With our model’s measures of MSI and cholinergic impairment, it may be possible to gauge the efficacy of pharmacological agents whose action would ameliorate the ASD condition.
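The superadditivity criterion can be written as a simple index comparing the combined response with the sum of the unisensory responses (a common additivity measure; the response values below are hypothetical):

```python
def superadditivity_index(r_combined, r_a, r_b):
    """Additivity measure for multisensory integration: the combined
    response relative to the sum of the two unisensory responses.
    Positive values indicate superadditive integration."""
    unisensory_sum = r_a + r_b
    return (r_combined - unisensory_sum) / unisensory_sum

# Hypothetical peak LFP magnitudes (arbitrary units) for the two
# stimulation sites and for combined stimulation
msi = superadditivity_index(3.0, 1.0, 1.0)
```

Here a combined response of 3.0 against unisensory responses of 1.0 each yields an index of 0.5, i.e. 50% above additive.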
Yi, Feng et al. Hippocampal ‘cholinergic Interneurons’ Visualized with the Choline Acetyltransferase Promoter: Anatomical Distribution, Intrinsic Membrane Properties, Neurochemical Characteristics, and Capacity for Cholinergic Modulation, Frontiers in Synaptic Neuroscience 2015, 7, 4.
Muller RA. The study of autism as a distributed disorder. Ment Retard Disabil Res Rev 2007, 13, 85–95.
Rippon G et al. Disordered connectivity in the autistic brain: challenges for the “new psychophysiology”. International Journal of Psychophysiology 2007, 63, 164–172.
Robertson CE and Baron-Cohen S, Sensory perception in Autism. Nature Reviews Neuroscience 2017, 18, 671–684.
Belmonte MK et al. Autism and abnormal development of brain connectivity. Journal of Neuroscience 2004, 24, 9223–9231.
Anagnostou E et al. Review of neuroimaging in autism spectrum disorders: what we have learned and where we go from here. Molecular Autism 2011, 2, 4.
Wass S. Distortions and disconnections: disrupted brain connectivity in autism. Brain and Cognition 2011, 75, 18–28.
Stevenson RA et al. Identifying and quantifying multisensory integration: a tutorial review. Brain Topography 2014, 27, 707–730.
Fetsch CR et al. Bridging the gap between theories of sensory cue integration and the physiology of multisensory neurons. Nature Reviews Neuroscience 2013, 14(6).
Ursino M et al. Neurocomputational approaches to modeling multisensory integration in the brain: a review. Neural Networks 2014, 60, 141–165.
Traub RD et al. Single column thalamocortical network model exhibiting gamma oscillations, sleep spindles and epileptic bursts. Journal of Neurophysiology 2005 Apr, 93(4): 2194–232.
Boothe DL et al. Impact of neuronal membrane damage on a local field potential in a large scale simulation of the neuronal cortex. Frontiers in Neurology 2017, 8, 236.
Samira Abbasi1, Dieter Jaeger2, Selva Maran2
1Hamedan University of Technology, Biomedical Engineering, Hamedan, Islamic Republic of Iran; 2Emory University, Department of Biology, Atlanta, GA, United States
Correspondence: Dieter Jaeger (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P214
Synaptic decoding of neural population activity at the single-cell level presents a challenging question. One way to address this question rigorously is to use detailed single-neuron simulations with well-defined input patterns to study the input–output function of biophysically realistic neurons. In previous work we developed a method to create artificial spike trains (ASTs) that match the spike train properties of cerebellar Purkinje cells, in order to study the cerebellar cortical-nuclear signal transformation. Here we generalize this method to create well-defined ASTs from templates of different types of recorded neurons and further test the method with surrogate data. The basic idea of our method is to use recorded neurons to construct rate templates of their activity using Gaussians. We can then draw gamma-distributed spike trains from these rate templates to obtain ASTs with different regularity properties. We can scale templates to different firing rates, add a refractory period to the gamma distributions, and add well-defined rate correlations between multiple ASTs by manipulating the rate template. Here we first tested our method with constant, sinusoidal, and zap rate templates. We find that slow rate fluctuations (~ 1 Hz) can be well captured by individual ASTs, but that faster rate fluctuations require a population average of ASTs to recapture the rate template. The ability to capture faster rate fluctuations is a function of the regularity (kappa parameter of the gamma distribution) and the rate of the ASTs being generated. These properties parameterize fundamental limits of coding rate fluctuations with noisy spike trains. We then use pyramidal neuron and mossy fiber recordings from the cerebellum to test our algorithms on real data beyond the fast-firing Purkinje cell populations used previously.
Unlike cerebellar Purkinje cells, which exhibit high firing rates and relatively regular spike trains, most pyramidal neurons and mossy fibers exhibit low firing rates and highly irregular, bursty firing patterns. In spite of these differences in firing rate and pattern, ASTs created with our method were able to match the statistical properties of spike trains in both cell types. The ability to re-create the original rate templates from such ASTs was limited by the same features as for our surrogate rate templates, and reveals limits on how faithfully low-rate, high-variability spike trains can communicate a rate code.
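The core AST construction can be sketched as follows (a simplified time-rescaling implementation with an illustrative sinusoidal template; the refractory period and rate correlations described above are omitted):

```python
import numpy as np

def gamma_ast(rate, dt, kappa, rng):
    """Draw one artificial spike train (AST) from a rate template (Hz)
    by time rescaling: inter-spike intervals are gamma(kappa, 1/kappa)
    in 'operational time', so the mean rate follows the template and
    kappa sets spike-train regularity."""
    cum = np.cumsum(rate) * dt              # operational time Lambda(t)
    spikes = []
    s = rng.gamma(kappa, 1.0 / kappa)
    while s < cum[-1]:
        spikes.append(np.searchsorted(cum, s) * dt)
        s += rng.gamma(kappa, 1.0 / kappa)
    return np.array(spikes)

rng = np.random.default_rng(1)
dt = 0.001                                  # 1 ms resolution, times in s
t = np.arange(0.0, 10.0, dt)
template = 40.0 + 20.0 * np.sin(2 * np.pi * 1.0 * t)   # slow 1 Hz modulation
train = gamma_ast(template, dt, kappa=5.0, rng=rng)
```

Higher `kappa` gives more regular trains; averaging many such trains recovers the underlying template, while a single train captures only its slow fluctuations.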
Abbasi S, et al. Robust transmission of rate coding in the inhibitory Purkinje cell to cerebellar nuclei pathway in awake mice. PLoS Computational Biology 2017, 13, e1005578.
Vergil Haynes1, Sharon Crook2
1Arizona State University, College of Mathematical and Statistical Sciences, Tempe, AZ, United States; 2Arizona State University, School of Life Sciences, Tempe, AZ, United States
Correspondence: Vergil Haynes (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P215
Understanding the contributions of different layers to cortical processing within the visual and somatosensory systems has led to testable hypotheses about how multiple simple sensory features are combined within cortical columns of these areas. This understanding provides a concrete basis for bridging high-level models of information processing with proposed neurobiological components and recordings of neuronal activity. To complement these studies, insight into cortical oscillations provides a convenient framework for understanding population processing of sensory features. In particular, auditory cortex demonstrates an organizational hierarchy of rhythmic activity, and these rhythms affect stimulus encoding. Previous studies further implicate phase-resetting and reciprocal interlaminar interactions in feature selection [2, 3]. Large-scale computational modeling studies of sensory systems have focused primarily on the visual and somatosensory cortices. Despite recent advances in characterizing the anatomical and physiological properties of the auditory system, few models reconstruct fundamental differences between the auditory system and other modalities. Here we present a model that incorporates some features unique to auditory cortex and replicates various statistical response properties of a non-primary auditory area. The model is a modification of a biologically realistic model of a thalamocortical network with multiple layers, converted for broader usage. Simulations were performed within the NEURON simulation environment using NetPyNE for model handling and analysis.
The model provides predictions about how convergent inputs carrying distinct information are processed within a thalamocortical network and demonstrates relationships between laminar cortical oscillations and interlaminar processing of convergent inputs. We also investigate how phase-resetting reorganizes laminar processing and affects feedforward and feedback outputs. Our model simulations assume that auditory processing relies on a hierarchical network structure. Feedforward inputs from earlier auditory populations (in area A1) are provided to the network based on statistical response patterns of multi-unit activity to a repertoire of auditory stimulation reported in the same literature used to replicate non-primary auditory responses. The model is modified and tuned to replicate the response properties of later areas under similar stimulation protocols. As the model uses a biophysically based multicompartmental formalism, we demonstrate how convergent extrinsic inputs shape local population activity reflected in both population firing and local field potentials.
Lakatos P, Shah AS, Knuth KH, et al. An Oscillatory Hierarchy Controlling Neuronal Excitability and Stimulus Processing in the Auditory Cortex. Journal of Neurophysiology 2005, 94(3)
Guo W, Clause AR, Barth-Maron A, Polley DB. A Corticothalamic Circuit for Dynamic Switching between Feature Detection and Discrimination. Neuron 2017, 95(1), 180–194.
Carracedo LM, Kjeldsen H, Cunnington L, et al. A Neocortical Delta Rhythm Facilitates Reciprocal Interlaminar Interactions via Nested Theta Rhythms. Journal of Neuroscience 2013, 33(26), 10750–10761
Traub RD, Contreras D, Cunningham MO, et al. Single-column thalamocortical network model exhibiting gamma oscillations, sleep spindles, and epileptogenic bursts. Journal of Neurophysiology 2005, 93(4), 2194–2232.
Kajikawa Y, de la Mothe LA, Blumell S, et al. Coding of FM sweep trains and twitter calls in area CM of marmoset auditory cortex. Hearing Research 2008, 239, 107–125.
Gleeson P, Steuber V, Silver RA. neuroConstruct: a tool for modeling networks of neurons in 3D space. Neuron 2007, 54(2), 219–235.
Carnevale NT, Hines ML. The NEURON Book. MA: Cambridge University Press, 2006.
Lytton WW, Seidenstein AH, Dura-Bernal S, et al. Simulation Neurotechnologies for Advancing Brain Research: Parallelizing Large Networks in NEURON. Neural Computation 2016, 28(10), 2063–2090.
Justas Birgiolas1, Richard Gerkin1, Sharon Crook2
1Arizona State University, School of Life Sciences, Tempe, AZ, United States; 2Arizona State University, School of Mathematical and Statistical Sciences, Tempe, AZ, United States
Correspondence: Justas Birgiolas (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P216
A channel model implemented in NeuroML can be converted to a wide range of formats and included in larger cell and network models, regardless of the original simulator used to implement the channel model. This work allows modelers to rapidly locate and visually inspect the dynamical properties of NeuroML channels and assess their suitability for inclusion in larger models. In ongoing work, we are utilizing the cell stimulation protocols from the Allen Brain Atlas project to characterize the responses of NeuroML cell models. The cell protocols include ramp, step, threshold, and pink noise current injections and assess properties such as resting voltage, rheobase and threshold currents for all NeuroML cell models in NeuroML-DB, as well as quantify model run-time complexity.
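As an illustration of how a step-current protocol can assess rheobase, here is a sketch on a toy integrate-and-fire stand-in (not an actual NeuroML model or the Allen protocol code; all parameters are assumptions):

```python
def lif_spikes(i_amp, tau=20.0, r_m=10.0, v_th=15.0, dt=0.1, T=500.0):
    """Spike count of a toy leaky integrate-and-fire cell (standing in
    for a NeuroML cell model) under a step current of amplitude i_amp."""
    v, n = 0.0, 0
    for _ in range(int(T / dt)):
        v += dt / tau * (-v + r_m * i_amp)
        if v >= v_th:           # threshold crossing: spike and reset
            v, n = 0.0, n + 1
    return n

def rheobase(lo=0.0, hi=10.0, iters=30):
    """Bisect for the smallest step amplitude eliciting any spike."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if lif_spikes(mid) > 0 else (mid, hi)
    return hi

rb = rheobase()
```

For this toy cell the steady-state voltage is `r_m * i_amp`, so the bisection converges to just above `v_th / r_m` = 1.5 within the finite stimulus duration.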
JB was supported by NIH grant F31DC016811, and JB, SC, and RG were supported by NIH grant R01MH1006674 to SC.
Birgiolas J, et al. Proc 27th Int Conf Sci Stat Db Mgmt 2015, 37.
Podlaski WF, Seeholzer A, Groschner LN, et al. ICGenealogy: Mapping the function of neuronal ion channels in model and experiment. bioRxiv 2016. https://doi.org/10.1101/058685.
McCormick DA, Wang Z, Huguenard J. Neurotransmitter control of neocortical neuronal activity and excitability. Cereb Cortex 1993, 3(5), 387–98.
Allen Cell Types Database (Oct. 2017 v.5) Electrophysiology. http://help.brain-map.org/display/celltypes/Documentation.
Richard Gerkin1, Russell J. Jarvis1, Sharon Crook2
1Arizona State University, School of Life Sciences, Tempe, AZ, United States; 2Arizona State University, School of Mathematical and Statistical Sciences, Tempe, AZ, United States
Correspondence: Richard Gerkin (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P217
Computational models of biological systems are rarely formally tested for agreement between model output and experimental data. SciUnit, a software framework for model validation, facilitates such rigorous testing. During model development, models can be continuously subjected to data-driven “unit tests” that quantitatively summarize model-data agreement, identifying modeling progress and highlighting output that fails to adequately reproduce observed data from the corresponding biological system. The OpenWorm Project is an international open-source collaboration to create a multiscale model of the organism C. elegans. At every scale, including subcellular, cellular, network, and behavior, this project employs one or more computational models that aim to recapitulate the corresponding biological system at that scale. This requires that the simulated behavior of each model be compared to experimental data both as the model is continuously refined and as new experimental data become available. We present data-driven OpenWorm model validation using SciUnit at three model scales: 1) ion channels, 2) neurons, and 3) whole organism motor output. This workflow is publicly visible and accepts community contributions to ensure that modeling goals are transparent and well-informed. Model validation tests are executed continuously as the models are updated and refined, ensuring that development converges towards the ultimate design specification: agreement with the underlying biological system.
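The test-judges-model pattern can be sketched in plain Python (a minimal illustration of the workflow, not the real sciunit package API; the observation values and the capability method name are hypothetical):

```python
class RestingPotentialTest:
    """A data-driven 'unit test': binds observed data to a scoring rule
    and judges any model exposing the required capability."""
    def __init__(self, observation):
        self.observation = observation      # e.g. {"mean": -60.0, "std": 2.0}

    def judge(self, model):
        # The required capability: the model must report a prediction
        prediction = model.get_resting_potential()
        z = abs(prediction - self.observation["mean"]) / self.observation["std"]
        return {"prediction": prediction, "z": z, "passing": z < 2.0}

class ToyNeuronModel:
    """Stand-in model; a real model would run a simulation here."""
    def __init__(self, v_rest):
        self.v_rest = v_rest
    def get_resting_potential(self):
        return self.v_rest

test = RestingPotentialTest({"mean": -60.0, "std": 2.0})
score = test.judge(ToyNeuronModel(v_rest=-62.5))
```

Because the test object is independent of any particular model, the same observation can continuously judge successive model revisions, which is the core of the continuous-validation workflow.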
Benjamin Cohen, Carson Chow, Shashaank Vattikuti
National Institutes of Health, NIDDK, Laboratory of Biological Modeling, Bethesda, MD, United States
Correspondence: Benjamin Cohen (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P218
Perceptual rivalry is the subjective experience of alternations between competing percepts when an individual is presented with an ambiguous stimulus. Mutual inhibition between pools of neurons encoding different interpretations of the stimulus is thought to underlie this disambiguation computation, where activity in one pool dominates and the corresponding percept is represented. A canonical cortical circuit model with mutual inhibition and fatigue can explain normalization, winner-take-all, rivalry, and various findings from psychophysics experiments. However, this approach has yet to incorporate realistic spiking statistics.
Meanwhile, balanced-state theory has been used to explain why cortical neurons fire irregularly. Researchers have modeled orientation selectivity and working memory in balanced networks, but competitive networks have only recently been investigated. A recent study showed that alternations resembling rivalry result from random networks receiving stochastic but competitive inputs. Here we explore a model of perceptual rivalry with realistic spiking. First, we show that normalization, winner-take-all, and rivalry behaviors can coexist with a realistic asynchronous-irregular state. Next, we compare the psychophysical properties of this model to those of a random network with stochastic input. Our model can explain Levelt’s second and fourth propositions and a gamma distribution of dominance times, and maintains a coefficient of variation of dominance times that is stable across changes in the input. By contrast, a random network cannot explain Levelt’s fourth proposition and does not reproduce a gamma distribution of dominance times.
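The mutual-inhibition-plus-fatigue mechanism behind rivalry can be illustrated with a minimal rate model (illustrative parameters, not the spiking network studied here): the dominant unit slowly fatigues until the suppressed unit escapes, producing alternations.

```python
import numpy as np

def simulate_rivalry(T=3000.0, dt=0.5, I=1.2, beta=3.0, g=3.0,
                     tau=10.0, tau_a=200.0):
    """Two rate units with mutual inhibition (beta) and slow adaptation
    (g, time constant tau_a): activity alternates between the units.
    All parameters are illustrative, not fitted to data."""
    f = lambda x: np.maximum(x, 0.0)         # threshold-linear gain
    r = np.array([0.5, 0.0])
    a = np.zeros(2)
    dominant = []
    for _ in range(int(T / dt)):
        inh = beta * r[::-1]                 # cross-inhibition
        r = r + dt / tau * (-r + f(I - inh - g * a))
        a = a + dt / tau_a * (-a + r)        # slow fatigue of active unit
        dominant.append(int(r[1] > r[0]))
    return np.array(dominant)

dom = simulate_rivalry()
switches = int(np.count_nonzero(np.diff(dom)))
```

With the cross-inhibition stronger than the input, the symmetric state is unstable (winner-take-all), and adaptation of the winner drives periodic dominance switches.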
Vattikuti S, Thangaraj P, Xie HW, et al. Canonical Cortical Circuit Model Explains Rivalry, Intermittent Rivalry, and Rivalry Memory. PLOS Computational Biology 2016, 12(5), e1004903
van Vreeswijk C, Sompolinsky H. Chaotic Balanced State in a Model of Cortical Circuits. Neural Computation 1998, 10: 1321–1371
Shaham N, Burak Y, Slow diffusive dynamics in a chaotic balanced neural network. PLOS Computational Biology 2017, 13(5): e1005505
Rosenbaum R, Smith MA, Kohn A, et al. The Spatial Structure of Correlated Neuronal Variability, Nature Neuroscience 2017, 20(1), 107–114
P219 An ensemble modeling approach to identifying cellular mechanisms in thoracic sympathetic neurons
Kun Tian1, Astrid Prinz1, Michael McKinnon2, Shawn Hochman2
1Emory University, Department of Biology, Atlanta, GA, United States; 2Emory University, Department of Physiology, Atlanta, GA, United States
Correspondence: Kun Tian (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P219
Thoracic sympathetic postganglionic neurons (tSPNs), innervated by preganglionic neurons in the spinal cord, are the last common motor output of the sympathetic nervous system, and directly control the vasculature and other internal organs. Dysfunction of tSPNs, such as hyperexcitability, has been observed after spinal cord injury, yet little is known about the cellular mechanisms that drive the excitability of tSPNs.
Combining electrophysiological data with computational modeling, we built the first physiologically-realistic single neuron model of the tSPN in mice, and elucidated several cellular mechanisms that govern tSPN dynamics. For example, we found that the post-inhibitory rebound observed in tSPNs ex vivo was induced by the sodium and potassium currents (INa and IKd) rather than by the T-type calcium current. We also found that both the M-type potassium current (IM) and the calcium-dependent potassium current (IKCa) were necessary to replicate the spike rate adaptation. Together, we reproduced all the essential features of tSPNs ex vivo with eight types of ionic currents: INa, IKd, IM, IKCa, a fast transient potassium current (IA), a persistent calcium current (ICaL), a hyperpolarization-activated inward current (Ih), and a leak current (IL). Using this single neuron model, we employed an ensemble modeling approach to build a database of physiologically-realistic tSPN models [1–3], which enables a more comprehensive and rigorous examination of the range of tSPN responses to various synaptic inputs. Overall, this work lays the foundation for examining both the recruitment principles of synaptic inputs at tSPNs and the dysfunction of tSPNs after spinal cord injury.
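The ensemble modeling step can be sketched as follows (a toy adaptive integrate-and-fire neuron stands in for the full eight-current tSPN model; parameter names, ranges, and the acceptance criterion are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_rate(g_leak, g_adapt, I=1.5, tau=20.0, tau_a=100.0,
             dt=0.1, T=1000.0):
    """Firing rate (Hz) of a toy adaptive integrate-and-fire neuron for
    one parameter set; a stand-in for simulating the full tSPN model."""
    v, a, spikes = 0.0, 0.0, 0
    for _ in range(int(T / dt)):
        v += dt / tau * (I - g_leak * v - a)
        a -= dt * a / tau_a                  # adaptation decays
        if v >= 1.0:                         # threshold: spike and reset
            v = 0.0
            a += g_adapt                     # spike-triggered adaptation
            spikes += 1
    return spikes / (T / 1000.0)

# Ensemble approach: sample many parameter sets and keep those whose
# output matches a target feature range (here, 5-30 Hz tonic firing)
ensemble = []
for _ in range(100):
    params = {"g_leak": rng.uniform(0.05, 0.5),
              "g_adapt": rng.uniform(0.0, 0.5)}
    if 5.0 <= toy_rate(**params) <= 30.0:
        ensemble.append(params)
```

The accepted parameter sets form the model database; sampling widely and filtering on measured features, rather than fitting a single "best" model, is what makes the subsequent analysis of response ranges possible.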
Ensemble modeling was performed on the Neuroscience Gateway Portal. This work is supported by the CMBC Interdisciplinary Neuroscience Pilot Research Fund at Emory University.
Prinz AA. Computational approaches to neuronal network analysis. Philosophical Transactions of the Royal Society B: Biological Sciences 2010, 365:2397–2405.
O’Leary T, Sutton AC, Marder E. Computational models in the age of large datasets. Current Opinion in Neurobiology 2015, 32:87–94.
Gao P, Ganguli S. On simplicity and complexity in the brave new world of large-scale neuroscience. Current Opinion in Neurobiology 2015, 32:148–155.
Sivagnanam S, Majumdar A, Yoshimoto K, et al. Introducing the Neuroscience Gateway. IWSG, CEUR Workshop Proceedings 2013, 993.
Xiaoxuan Jia, Joshua Siegle, Gregg Heller, Séverine Durand, Shawn Olsen
Allen Institute for Brain Science, Modelling, Analysis and Theory, Seattle, WA, United States
Correspondence: Xiaoxuan Jia (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P220
The mammalian visual cortex is composed of multiple areas organized in a hierarchical structure, with feedforward, feedback and horizontal connections. The bottom-up convergent connections generate larger spatial receptive fields and longer temporal integration windows at higher levels of the visual hierarchy. While the structure of these anatomical connections is relatively fixed on short timescales, the structure of functional interactions can rapidly change conformation with changes in external stimuli and internal brain states. This flexibility is critical for selectively routing signals for perception, cognition, and behavior. Therefore, understanding how neurons form functional networks is fundamental for deciphering brain function. Studies that investigate functional networks with resting-state fMRI can image the entire brain at once but, due to the low temporal resolution of this method, fail to uncover network dynamics at the fast timescales that are important for many aspects of perception and decision-making. Studies that attempt to measure functional connectivity with electrophysiological recordings have typically been restricted to recordings from two brain areas at a time, with a limited number of simultaneously recorded neurons in each dataset. Therefore, novel methods are needed for recording large populations of neurons with sufficient temporal resolution to study dynamic functional connectivity at a large scale. Here we make use of the newly developed Neuropixels probe, which contains 384 densely arranged recording sites along a linear shank. We built a platform to record simultaneously from 6 independent probes inserted in mouse visual cortical areas, including primary visual cortex (V1) and 5 higher-order visual cortical areas (LM, RL, AL, PM, and AM). The linear probes are inserted across the layers of the cortex in head-fixed awake mice.
The high yield of the Neuropixels probes allows us to record simultaneously from more than 700 well-isolated neurons distributed across cortical layers and areas in the visual cortex of a single mouse. To maximize the probability of finding mono-synaptic functional connections, we mapped the retinotopy of each area with intrinsic signal imaging and specifically targeted regions with overlapping visual fields; this targeting was validated by receptive field mapping. To compare functional networks under different sensory inputs, we studied activity during drifting gratings and natural movies, in addition to spontaneous, non-stimulus-driven activity. We used two methods to measure functional connectivity within the visual cortical network: fine-timescale pairwise cross-correlogram (CCG) analysis and Granger causality analysis, both of which can reveal functional relationships among recorded neurons. We found that both the proportion of effective connections and the strength of the functional connection decay as a function of receptive field separation. The time delay of effective connections revealed layer-specific functional sub-networks, based on cortical layers estimated from current source density analysis. We also observed significant differences in functional connectivity between gratings, movies, and spontaneous activity. In sum, our platform provides a unique opportunity to directly study millisecond-timescale functional networks across 6 highly interconnected cortical areas.
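The fine-timescale CCG analysis can be sketched as follows (synthetic spike trains with an imposed 3 ms lag; the window and bin sizes are illustrative, not those used in the study):

```python
import numpy as np

def cross_correlogram(ref, target, window=0.05, bin_size=0.001):
    """Pairwise spike-time CCG: histogram of (target - ref) lags within
    +/- window seconds. Putative mono-synaptic functional connections
    appear as short-latency, short-duration peaks."""
    edges = np.arange(-window, window + bin_size, bin_size)
    lags = (target[None, :] - ref[:, None]).ravel()
    counts, _ = np.histogram(lags, bins=edges)
    centers = edges[:-1] + bin_size / 2.0
    return counts, centers

# Synthetic pair: the target neuron fires 3 ms after the reference
rng = np.random.default_rng(0)
ref = np.sort(rng.uniform(0.0, 10.0, 500))
target = ref + 0.003
counts, centers = cross_correlogram(ref, target)
peak_lag = centers[np.argmax(counts)]
```

In practice the raw CCG would also be corrected for firing-rate covariations (e.g. by jitter or shuffle subtraction) before a peak is accepted as an effective connection.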
Matthew Singh1, Todd Braver2, ShiNung Ching3
1Washington University, St. Louis, Department of Neuroscience, St. Louis, MO, United States; 2Washington University, St. Louis, Department of Psychology, St. Louis, MO, United States; 3Washington University, St. Louis, Electrical and Systems Engineering, St. Louis, MO, United States
Correspondence: Matthew Singh (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P221
P222 Revisiting efficient coding of natural sounds in the environment: unsupervised learning or task-based optimization?
Hiroki Terashima, Shigeto Furukawa
NTT Communication Science Laboratories, Sagamihara, Japan
Correspondence: Hiroki Terashima (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P222
Efficient coding has been a leading computational principle in sensory neuroscience. Following work on the visual system, Lewicki argued that the auditory periphery can be explained by unsupervised learning of natural sounds. One of the study’s claims is that a basis optimized to code human voices resembles the auditory nerve fibres, whose filter sharpness distribution is preserved across mammals. We were able to reproduce the matched distribution by applying the same algorithm to clean recordings of human voices. However, we also found that an efficient code for human voices recorded in the natural environment shows much sharper tuning than the auditory nerve fibres, even though the environmental recording is closer to the sensory signal our ears receive and more natural than a studio recording. Our analysis showed that the waveforms are distorted on a short time scale comparable to the time window of the auditory nerve filters, and that the mismatch can be reproduced by simulating environmental reverberations, suggesting that reverberation is the primary factor in the mismatch. How can we better model the auditory periphery, including these environmental modulations? Inspired by recent work on the visual hierarchy, we hypothesized that the auditory periphery is optimized to perform auditory tasks we face in the natural environment rather than for unsupervised learning of the entire incoming signal. To test this, we built a deep convolutional neural network that receives reverberated waveform inputs. As a naturalistic task related to voices on a short time scale comparable to the time window of the auditory periphery, we chose phoneme classification. Waveforms and phone labels were taken from the TIMIT database. Each input had a length of 2000 data points, with the target phoneme at the centre. The input waveforms were convolved with an impulse response randomly chosen from a database.
After the training, the waveform filters learned in the first layer showed characteristics similar to the auditory nerve fibres, whereas a normal efficient code for the same input did not. This result does not depend on the speech dataset, since we could reproduce a qualitatively similar result by applying a different task, that is, classification of environmental sound recordings. Overall, the results suggest that the auditory periphery efficiently encodes task-related information in a reverberation-resistant manner rather than the entire incoming signal and that our understanding of sensory systems in the natural environment, not only the visual system, can be furthered by using a framework of task-based optimization.
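The filter-sharpness comparison rests on a measure of tuning width like the following (a -10 dB bandwidth Q estimate applied to a synthetic gammatone-like kernel; the exact sharpness definition used in the study may differ):

```python
import numpy as np

def filter_sharpness(kernel, fs):
    """Tuning sharpness of a linear filter, Q = centre frequency divided
    by the -10 dB bandwidth of its power spectrum; the kind of measure
    used to compare learned bases against auditory nerve fibre tuning."""
    spec = np.abs(np.fft.rfft(kernel, n=8192)) ** 2
    freqs = np.fft.rfftfreq(8192, d=1.0 / fs)
    centre = freqs[np.argmax(spec)]
    band = freqs[spec >= spec.max() / 10.0]   # -10 dB support
    return centre / (band.max() - band.min())

# Synthetic gammatone-like kernel centred at 1 kHz; a narrower bandwidth
# parameter (the 100.0 Hz factor) would yield a sharper, higher-Q filter
fs = 16000.0
t = np.arange(0.0, 0.025, 1.0 / fs)
kernel = (t**3 * np.exp(-2 * np.pi * 100.0 * t)
          * np.cos(2 * np.pi * 1000.0 * t))
q = filter_sharpness(kernel, fs)
```

Applying such a measure to each learned basis function (or to each first-layer network filter) yields the sharpness distribution that is compared with auditory nerve data.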
Lewicki MS. Efficient coding of natural sounds. Nature Neuroscience 2002, 5, 356–363.
Yamins DLK, Hong H, Cadieu CF. Performance-optimized hierarchical models predict neural responses in higher visual cortex. PNAS 2014, 111(23), 8619–8624.
Traer J, McDermott JH. Statistics of natural reverberation enable perceptual separation of sound and space. PNAS 2016, 113(48).
P223 Emergence of auditory-system-like representation of amplitude modulation in a deep neural network trained for sound classification
Takuya Koumura, Hiroki Terashima, Shigeto Furukawa
NTT Communication Science Laboratories, Atsugi, Japan
Correspondence: Takuya Koumura (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P223
Supported by a Grant-in-Aid for Scientific Research on Innovative Areas, "Innovative SHITSUKAN Science and Technology".
P224 Reproducing the cognitive function with the robustness against the brain structure and with the efficient learning algorithm
Yoshihisa Fujita, Shin Ishii
Kyoto University, Graduate School of Informatics, Kyoto, Japan
Correspondence: Yoshihisa Fujita (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P224
Computational modeling of the biological neural networks that generate cognitive functions faces difficulties that artificial neural networks do not. One major problem is the individual variability of brain structure, which includes extensively different connectivity patterns of neurons for the same function. Another is that the learning algorithms widely used in artificial neural networks, such as error backpropagation, have high computational costs and are not biologically plausible. Models of biological neural networks therefore require both robustness against the network structure and an efficient, plausible learning algorithm. How highly cognitive functions can be achieved under these constraints is largely unknown. To tackle this issue, we developed a neural network model based on the Extreme Learning Machine (ELM), which includes random, fixed connections. We assumed that this randomness corresponds to the structural variability. Because ELM exploits its random connections, its learning algorithm is quite simple. Using ELM, we implemented the function of recognizing words from a string of letters. Since this function requires integrating letters into words, we adopted the Vector Symbolic Architecture (VSA), in which the patterns of neural activity for recognized objects are expressed as binary vectors and the integration process is expressed as a vector operation. We developed a new learning model combining ELM and VSA. Unlike in ordinary ELM, the balance between excitatory and inhibitory neurons was crucial in our model, which is consistent with biological findings. Within this balanced regime, the model successfully learned the vocabulary at low computational cost. We used this model to examine how neural representations of misspellings differ from those of correct spellings. Our model can provide a clue as to how the brain achieves cognitive functions efficiently despite structural variability.
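The core of the ELM scheme, random fixed hidden connections with a one-shot least-squares readout, can be sketched as follows. This is a minimal illustrative example, not the authors' ELM+VSA model; the hidden-layer size and tanh nonlinearity are assumptions.

```python
import numpy as np

def train_elm(X, Y, n_hidden=200, seed=0):
    """Extreme Learning Machine: input-to-hidden weights are random and
    fixed; only the linear readout is learned, by a single
    least-squares solve (no backpropagation)."""
    rng = np.random.default_rng(seed)
    W_in = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W_in + b)             # fixed random projection
    W_out = np.linalg.pinv(H) @ Y         # one-step readout learning
    return W_in, b, W_out

def predict_elm(X, W_in, b, W_out):
    return np.tanh(X @ W_in + b) @ W_out

# Toy demo: XOR, which a linear readout on the raw inputs cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0.0], [1.0], [1.0], [0.0]])
params = train_elm(X, Y)
pred = predict_elm(X, *params)
```

The single pseudoinverse solve is what makes ELM training so cheap compared with backpropagation, which is the efficiency point the abstract relies on.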
P225 A pipeline for macro-scale connectomics of the common marmoset with global fiber reconstruction from diffusion MRI
Ken Nakae1, Junichi Hata2, Henrik Skibbe1, Alexander Woodward3, Carlos Gutierrez4, Hiromichi Tsukada4, Gong Rui3, Ryo Ito1, Hideyuki Okano2, Shin Ishii1
1Kyoto University, Graduate School of Informatics, Kyoto, Japan; 2RIKEN BSI, Laboratory for Marmoset Neural Architecture, Wako, Japan; 3RIKEN BSI, Neuroinformatics Japan Center, Wako, Japan; 4OIST, Neural Computation Unit, Okinawa, Japan
Correspondence: Ken Nakae (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P225
Common marmosets (Callithrix jacchus) are non-human primates with a small brain (~ 8 g) that mature quickly and can be genetically manipulated. These features make them suitable for understanding the complex structure and connectome of the primate brain, and provide another animal for inter-species comparison. The complexity of the primate brain is also reflected in individual differences in structure and connectome. The RIKEN BSI team is obtaining diffusion magnetic resonance imaging (dMRI) data from a large number of marmoset brains (~ 50) to construct macro-scale connectomes and reveal the individuality between brains. Using dMRI, we can observe water diffusion within small voxels (200 μm isotropic) across the brain and use this to estimate the direction of fiber bundles through each voxel. Observing the whole brain with dMRI and connecting these estimated fiber directions between adjoining voxels allows us to estimate long-range fiber bundles through the white matter of the brain. Here, we propose a pipeline for obtaining the macro-scale connectome from RIKEN's marmoset data and analyze individual differences in structure and connectome. The pipeline consists of two main components: (1) global reconstruction of fibers, and (2) improved parcellation of brain regions by a deep learning technique. (1) Because the fiber structure of the marmoset brain is complex, we often observe multiple, different directions of fiber bundles within a single dMRI voxel. We adopt a Bayesian method for global fiber reconstruction from dMRI (Reisert et al. 2011), which successfully distinguishes such crossing directions in human dMRI analysis. (2) To build the connectivity matrix between brain regions, we parcellated individual brain regions and counted the number of fiber bundles between ROIs. We developed a structural registration method based on a recent deep learning technique for parcellating the white and gray matter (VoxResNet).
We estimated the cortex, white matter, subcortical regions and cerebellum in individual brains using this method. This coarse parcellation enhances the precise correspondence between an individual brain and a standard brain with an atlas, in which detailed anatomical structure is defined. We analyzed RIKEN's data with our pipeline and obtained the individual and average connectivity matrices of the marmoset brain. The graph structure of the marmoset connectivity matrices follows an exponential decay rule (EDR): the strength of connectivity between two regions decays exponentially with the distance between them. This is consistent with the graph structure of connectivity matrices found in studies of other species (mouse and macaque monkey). We evaluated and visualized the individuality of the marmoset connectivity matrices using t-SNE, a low-dimensional manifold mapping method. We found that all individuals could be discriminated from their connectivity matrices, despite variations in the dMRI experimental settings and b-values.
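The exponential decay rule can be checked on a connectivity matrix by a log-linear fit of connection strength against inter-region distance. The sketch below is illustrative only (not the authors' pipeline) and uses synthetic weights with a known decay constant to show that the fit recovers it.

```python
import numpy as np

def fit_edr(dist, weight):
    """Fit the exponential decay rule w = A * exp(-lam * d) by linear
    regression on log-weights; returns (A, lam)."""
    mask = weight > 0
    slope, intercept = np.polyfit(dist[mask], np.log(weight[mask]), 1)
    return np.exp(intercept), -slope

# Synthetic connectivity with a known decay constant of 0.8 per mm,
# perturbed by multiplicative lognormal noise.
rng = np.random.default_rng(1)
d = rng.uniform(0.5, 20.0, 500)           # inter-region distances (mm)
w = 2.0 * np.exp(-0.8 * d) * rng.lognormal(0.0, 0.1, d.size)
A, lam = fit_edr(d, w)                    # lam recovered close to 0.8
```

On real data the same regression applied to the non-zero entries of the region-by-region matrix gives the species-specific decay constant that EDR studies compare across brains.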
Yoshiki Kashimori1, Ryo Tani1, Shiro Yano2
1University of Electro-Communications, Dept. of Engineering Science, Chofu, Tokyo, Japan; 2Tokyo University of Agriculture and Technology, Division of Advanced Information Technology and Computer Science, Tokyo, Japan
Correspondence: Yoshiki Kashimori (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P226
We can recognize complex visual scenes rapidly and effortlessly. This remarkable ability requires effective processing of visual information along the multiple stages of the visual pathways. Neurophysiological experiments have provided evidence for a "simple-to-complex" processing model based on a hierarchy of increasingly complex image features, implemented along the feedforward pathway of the ventral visual system. On the other hand, the visual system has abundant feedback connections, whose number is even larger than that of the feedforward ones. Li et al. [1] showed that top-down signals allow neurons of the primary visual cortex (V1) to engage stimulus components that are relevant to a perceptual task and to discard influences from components that are irrelevant to it: V1 neurons exhibited tuning curves modulated by the task context. We previously demonstrated what kinds of top-down signals generate these tuning curves [2]. However, it remains unclear how top-down signals reflecting task behaviors emerge and how they modulate the tuning curves of V1 neurons. To address this issue, we develop a model of the visual system that consists of networks of V1, a higher visual area, and a recognition area. We consider one of the perceptual tasks used by Li et al., the bisection task. Neurons of the higher visual area receive a top-down signal reflecting the task decision, as well as the feedforward inputs from V1 neurons, and feed their outputs back to V1 neurons. We use reinforcement learning to acquire behavior adapted to the task. The synaptic weights from neurons of the higher visual area to those of the recognition area were determined by the Mirror Descent (MD) method [3] on the basis of the behavioral error rate. The synaptic weights carrying top-down signals in the three areas were shaped by Hebbian learning, concurrently with the learning of the feedforward connections.
We show here how the feedforward and feedback connections among the three areas are formed by the learning of adaptive behavior. Top-down signals are generated concurrently with the acquisition of adaptive behavior. We also show that the tuning modulations of V1 neurons are caused by changes in activity through long-range connections of V1 neurons, elicited by top-down signals from the recognition area to V1 via the higher area. Our model yields tuning properties of V1 neurons that are compatible with the experimental results of Li et al. These results provide insight into how behavior affects information processing in early sensory areas.
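As an illustration of the mirror descent update referred to above (not the authors' implementation), the negative-entropy mirror map yields the exponentiated-gradient update, which keeps a weight vector on the probability simplex; the toy linear loss below is an assumption.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, eta=0.5, steps=300):
    """Mirror descent with the negative-entropy mirror map: the update
    x <- x * exp(-eta * grad(x)), followed by renormalization, keeps x
    on the probability simplex (the exponentiated-gradient update)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x * np.exp(-eta * grad(x))
        x /= x.sum()
    return x

# Toy demo: minimize the linear loss <c, x> over the simplex; the
# minimizer puts all mass on the smallest-cost coordinate.
c = np.array([0.9, 0.1, 0.5])
x = mirror_descent_simplex(lambda x: c, np.ones(3) / 3)
```

Choosing the entropy as the mirror map is what makes the update multiplicative, a natural fit when the quantity being adapted (here, an error-rate-driven weighting) must stay non-negative and normalized.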
Li W, Piech V, Gilbert CD. Perceptual learning and top-down influences in primary visual cortex. Nat Neurosci 2004, 13(3): 900–913.
Kamiyama A, Fujita K, Kashimori Y. A neural mechanism of dynamic gating of task-relevant information by top-down influence in primary visual cortex. BioSystems 2016, 150:138–148.
Miyashita M, Yano S, Kondo T. Mirror descent and acceleration. 2017, arXiv:1709.02535.
P227 Uncertainpy: A Python toolbox for uncertainty quantification and sensitivity analysis of computational neuroscience models
Geir Halnes1, Gaute Einevoll1, Simen Tennøe2
1Norwegian University of Life Sciences, Faculty of Science and Technology, Aas, Norway; 2University of Oslo, Department of Informatics, Oslo, Norway
Correspondence: Geir Halnes (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P227
Computational models in neuroscience typically contain a number of parameters that are uncertain, either because they vary between cells or dynamically within a cell, or because they are difficult to measure accurately. Uncertainty quantification is a means to quantify the uncertainty in the model output that arises from uncertainty in the model parameters, while sensitivity analysis quantifies how much of the output uncertainty each parameter is responsible for. Unfortunately, uncertainty quantification and sensitivity analysis are not standard practice in the field of neuroscience, and models are commonly presented without any form of uncertainty quantification. To help alleviate this problem we have created Uncertainpy (https://github.com/simetenn/uncertainpy), an open-source Python toolbox tailored to perform uncertainty quantification and sensitivity analysis of neuroscience models. Uncertainpy aims to make it easy for users to perform uncertainty quantification and sensitivity analysis without requiring detailed prior knowledge. The toolbox allows uncertainty quantification and sensitivity analysis to be performed on already existing models, and does not require changes to the model implementation. Uncertainpy primarily bases its analysis on polynomial chaos expansions [1], which are faster than the more standard Monte Carlo based approaches. Polynomial chaos expansions are obtained from the previously developed package Chaospy [2]. Uncertainpy does not merely perform an uncertainty analysis of the "raw" model output (e.g. membrane voltage traces), but is tailored for neuroscience applications by a built-in capability for identifying characteristic features in the model output. Uncertainpy then performs an uncertainty analysis of these features.
For example, the toolbox can quantify the uncertainty and sensitivity of salient model response features such as spike timing, action potential width, mean interspike interval, and other features relevant for various neural and neural network models. Uncertainpy comes with several common neuroscience models and features built in, and including custom models and new features is easy. We here present Uncertainpy and demonstrate its broad applicability by performing an uncertainty quantification and sensitivity analysis of three case studies relevant for neuroscience: the original Hodgkin-Huxley point-neuron model [3], a multi-compartmental model of a thalamic interneuron [4] implemented in the NEURON simulator, and a sparsely connected recurrent network model [5] implemented in the NEST simulator. A preprint of this work is available at bioRxiv [6].
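The idea of feature-based uncertainty quantification can be illustrated with a brute-force Monte Carlo sketch (Uncertainpy itself uses the faster polynomial chaos expansions). The toy leaky integrate-and-fire model and the parameter range below are assumptions, not one of the paper's case studies.

```python
import numpy as np

def lif_spike_count(I, tau=0.02, v_th=1.0, dt=1e-3, T=0.5):
    """Spike-count feature of a toy leaky integrate-and-fire neuron
    driven by a constant current I (Euler integration, reset to 0)."""
    v, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v += dt * (I - v) / tau
        if v >= v_th:
            v, spikes = 0.0, spikes + 1
    return spikes

# Brute-force Monte Carlo: propagate a uniform uncertainty on the
# input current through the model and summarize the feature.
rng = np.random.default_rng(0)
samples = [lif_spike_count(I) for I in rng.uniform(1.2, 2.0, 100)]
mean, std = np.mean(samples), np.std(samples)
```

The analysis is performed on a feature (spike count) rather than the raw voltage trace, which is exactly the design choice the toolbox makes; polynomial chaos replaces the 100 model evaluations here with far fewer.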
Xiu, D, Hesthaven JS. High-Order Collocation Methods for Differential Equations with Random Inputs. SIAM Journal on Scientific Computing 2005, 27, 1118–1139.
Feinberg J, Langtangen HP. Chaospy: An open source tool for designing methods of uncertainty quantification. Journal of Computational Science 2015, 11, 46–57.
Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol 1952, 117, 500–544.
Halnes G, Augustinaite S, Heggelund P, Einevoll GT, Migliore M. A multi-compartment model for interneurons in the dorsal lateral geniculate nucleus. PLoS Computational Biology 2011, 7, 1–12.
Brunel, N. Dynamics of Sparsely Connected Networks of Excitatory and Inhibitory Spiking Neurons. Journal of Computational Neuroscience 2000, 8, 183–208.
Tennøe S, Halnes G, Einevoll GT. Uncertainpy: A Python toolbox for uncertainty quantification and sensitivity analysis in computational neuroscience. bioRxiv 2018, 274779.
P228 The emergence of spatiotemporal spike patterns and feature binding relations within a spiking neural network model of the primate visual cortex: a cortical implementation of capsule networks
James Isbister, Simon Stringer
University of Oxford, Department of Experimental Psychology, Oxford, United Kingdom
Correspondence: James Isbister (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P228
The feedforward propagation of visual information in rate-coded neural networks discards information about which low-level features are driving high-level, transformation-invariant features. In particular, when multiple visual stimuli are present, the network has no way of assigning which low-level features belong to which high-level features. Information about the configuration of a high-level feature, i.e. its spatial composition in terms of its low-level features, is therefore lost. In visual psychology, this is known as the feature binding problem. Capsule networks [1] introduce a new type of artificial neuron called a capsule, whose activity is represented by a vector rather than a scalar "firing rate". The magnitude of the vector represents the probability that the capsule's preferred feature is present, whilst the direction of the vector represents the configuration of the feature transform. Capsules therefore provide a simultaneous representation of the presence of a feature and of its configuration in terms of lower-level features, somewhat analogous to feature binding. However, capsule networks are not a plausible model of brain function. We show how a spiking neural network can give rise to emergent spatiotemporal spike patterns and feature binding representations. An earlier modelling study [2] showed how synchronized activity can emerge over a series of layers. Building on this work, we show how incorporating randomized axonal delays leads to the emergence of spatiotemporal patterns of spikes (polychronization). This is an inductive process over a series of layers, and such spike patterns emerge even when the input neurons have randomized spike times. These spatiotemporal spike patterns carry information about the hierarchical binding relations between lower- and higher-level features.
Our simulations demonstrate that neurons can learn to respond invariantly over a range of transformations of a high-level feature, whilst simultaneously representing the configuration of the feature transform in the precise timings of their spikes. The relative timings between the spikes of neurons representing a high-level feature vary continuously and monotonically as the feature undergoes transformation. Such a representation may be how the brain forms something similar to capsules, and could be part of the brain's solution to the feature binding problem. Our spiking network models can also represent the hierarchical binding relations between lower- and higher-level features through the emergence of binding neurons [3], which fire if and only if a neuron encoding a lower-level feature is participating in the firing of a neuron representing a higher-level feature. This implies that the low-level feature is part of the high-level feature. Such binding neurons develop through visually guided learning with STDP.
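For reference, the pair-based STDP rule underlying such visually guided learning is commonly written as an exponentially decaying weight change in the pre/post spike-time difference. The amplitudes and time constant below are generic textbook values, not the authors' parameters.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP window: dt = t_post - t_pre in ms. Causal
    pairings (dt > 0) potentiate, anti-causal pairings depress; the
    amplitudes and time constant are generic assumed values."""
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

# Pre leading post by 5 ms strengthens the synapse; the reverse
# ordering weakens it.
dw_causal = float(stdp_dw(np.array([5.0]))[0])
dw_anticausal = float(stdp_dw(np.array([-5.0]))[0])
```

Because the rule rewards causal, millisecond-precise spike orderings, it is well suited to stabilizing the polychronous spatiotemporal patterns described above.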
Sabour S, Frosst N, Hinton GE. Dynamic routing between capsules. In Advances in Neural Information Processing Systems 2017, pp. 3859–3869.
Diesmann M, Gewaltig M, Aertsen A. Stable propagation of synchronous spiking in cortical neural networks. Nature 1999, 402(6761), 529–533.
Eguchi A, Isbister J, Ahmed N, Stringer SM. The emergence of polychronization and feature binding in a spiking neural network model of the primate ventral visual system. Psychological Review, in press.
P229 Inhibitory plasticity moulding excitatory spatio-temporal receptive fields in a spiking neural network model
Nasir Ahmad1, Kerry Walker2, Simon Stringer1
1University of Oxford, Department of Experimental Psychology, Oxford, United Kingdom; 2University of Oxford, Department of Physiology, Anatomy and Genetics, Oxford, United Kingdom
Correspondence: Nasir Ahmad (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P229
Excitatory plasticity has long been the focus of learning in spiking neural networks: from the earliest pairwise Spike-Timing Dependent Plasticity (STDP) rules to triplet STDP rules and beyond, excitatory learning rules have been explored both experimentally and theoretically. Inhibitory plasticity has been appreciated only more recently and shows promise in network stabilisation, homeostasis, and predictive coding [1, 2]. Very recently, investigations into the role of inhibitory plasticity in decorrelating excitatory responses and shaping excitatory synaptic weights have emerged in both experimental and theoretical studies [3, 4]. This study aims to highlight these effects and to argue that, under a given inhibitory plasticity rule, inhibitory spatio-temporal receptive fields play a crucial role in the development of excitatory receptive field structures. Decorrelation of excitatory cells has been identified as a function of inhibitory neurons and plasticity. This description can be misleading: in fact, excitatory neurons in a network with a correlative inhibitory plasticity rule have activity which is decorrelated from their inhibitory input activity. The result is two-fold. First, excitatory receptive fields can only cover those stimuli which are not predicted/explained by their inhibitory inputs. In the case of balance, inhibitory inputs can "explain away" all incoming excitatory stimulation. If this balance is incomplete, however, excitatory cells respond to only those stimuli which their inhibitory inputs do not cover. The connectivity (and receptive field tuning widths) of inhibitory cells therefore determine the corresponding excitatory cell response characteristics. Curious effects also occur if excitatory and inhibitory cells are active on different input time scales.
This is most clear when we consider dynamic stimuli with inhibitory neurons and synapses acting on a timescale significantly faster or slower than excitatory cells/synapses. Under these conditions, excitatory cells compete to form a receptive field on the timescale of the incoming inhibition. The excitatory receptive field forms a peak at this timescale (e.g. close to or far from stimulus onset), and learning at all other timescales reflects features that predict activation at that peak.
These effects are studied in a spiking neural network model. In particular, this study shows the emergence of structure in the inhibitory weights (related to the correlation of pre- and postsynaptic cell responses) and how this affects the emergence of excitatory spatio-temporal receptive fields.
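A rate-based caricature of a correlative inhibitory plasticity rule (in the spirit of Vogels et al., with assumed toy parameters) illustrates how inhibitory weights grow until they "explain away" the excitatory drive, leaving the postsynaptic rate at a target value.

```python
import numpy as np  # imported for consistency; core logic is scalar

def inhibitory_update(w, r_pre, r_post, rho0=5.0, eta=0.001):
    """Rate-based caricature of a correlative inhibitory plasticity
    rule: the inhibitory weight grows when the postsynaptic rate
    exceeds the target rho0 and shrinks below it. Parameters are
    assumed toy values."""
    return max(0.0, w + eta * r_pre * (r_post - rho0))

# One excitatory cell with fixed excitatory drive and a plastic
# inhibitory synapse; the rule settles at the weight that balances
# excitation so the output rate equals the target.
g_exc, r_inh, w = 20.0, 10.0, 0.0
for _ in range(2000):
    r_post = max(0.0, g_exc - w * r_inh)  # threshold-linear output
    w = inhibitory_update(w, r_inh, r_post)

# At convergence r_post ≈ rho0, i.e. w ≈ (g_exc - rho0) / r_inh = 1.5
```

If the inhibitory input cannot fully cancel the drive (e.g. the weight is capped), the residual excitation is exactly the "uncovered" stimulus component that the text argues shapes the excitatory receptive field.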
Vogels TP, Sprekeler H, Zenke F, et al. Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science 2011, 334, 1569–1573.
Boerlin M, Machens CK, Denève S. Predictive coding of dynamical variables in balanced spiking networks. PLoS Comput Biol. 2013, 9, e1003258.
Clopath C, Vogels TP, Froemke RC, Sprekeler H. Receptive field formation by interacting excitatory and inhibitory synaptic plasticity. bioRxiv 2016, 066589. https://doi.org/10.1101/066589
Sprekeler H. Functional consequences of inhibitory plasticity: homeostasis, the excitation-inhibition balance and beyond. Curr Opin Neurobiol. 2017, 43, 198–203.
P230 Learning to be modular: Interplay between dynamics of synaptic strengths and neuronal activity in the brain results in its modular connection topology
Janaki Raghavan1, Sitabhra Sinha2
1University of Madras & The Institute of Mathematical Sciences, Department of Physics, Chennai, India; 2The Institute of Mathematical Sciences, Theoretical Physics, Chennai, India
Correspondence: Janaki Raghavan (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P230
Anand Pathak, Shakti N. Menon, Sitabhra Sinha
The Institute of Mathematical Sciences, Theoretical Physics, Chennai, India
Correspondence: Anand Pathak (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P231
In order to unravel how the brains of higher organisms carry out cognitive and motor functions, it is crucial to understand the structural organization of neurons at different levels of the hierarchy. At an anatomical level, the mammalian brain is compartmentalized into different regions (lobes, gyri, nuclei, etc.). Brain regions, each containing millions of neurons, are connected to each other through axonal bundles projecting from their respective neurons. One approach to studying the structural connectome is to consider the brain network at the scale of brain regions. This approach has become possible for the brains of higher mammals, e.g. the macaque monkey, for which a large amount of data on different brain areas and their connectivity has been collated in the online repository CoCoMac. Using as our starting point a previous study [1] that organized the data available from the CoCoMac database of macaque brain connectivity, we have reconstructed an unambiguous and comprehensive brain network with regions covering the entire cortex as well as subcortical regions (Fig. 1). Mesoscopic analysis of a network pertains to those substructures that occur at a level well above a few nodes but below the whole network. In a brain network, knowing the mesoscopic organization can characterize its basic structural and functional makeup. In particular, we consider the modular and hierarchical organization of the macaque brain. Modular networks consist of sub-networks, called modules, that are densely connected within themselves but sparsely connected to each other. A high degree of structural modularity in a brain network reveals functional compartmentalisation and co-ordination among brain regions. A hierarchical network, on the other hand, is organized into layers with dense connections between consecutive layers and relatively sparse connections between non-consecutive layers.
Uncovering the hierarchical organization of a brain network illuminates not only the structural and functional compartmentalization but also the directionality of information flow. Analysing these two distinct types of mesoscopic organization reveals the larger plan of the macaque brain architecture.
Our analysis reveals that the macaque brain network exhibits a highly modular structure which, while in accordance with the known functional organization, provides interesting new insights into and implications for the functioning of many less-studied brain regions. The arrangement of modules is spatially contiguous except for one saliently fragmented module that is particularly intriguing and suggestive in its functionality. This strongly suggests that even though macaque brain connectivity is clearly governed by the spatial configuration of the brain regions, the modular structure is essentially independent of the geometry and shape of the brain; the emergence of modules thus appears to be a more fundamental attribute. An even more surprising observation is that the macaque brain network also has a highly hierarchical structure. Our novel approach for determining hidden hierarchical structure in a network opens up a whole range of possible analyses to further understand the structure and function of neuronal networks in general.
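Modularity of a network is typically quantified by Newman's Q, the fraction of edge weight inside modules minus that expected by chance given the degree sequence. A minimal sketch on a toy two-module graph (not the macaque network itself):

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q of an undirected network: the fraction of
    edge weight inside modules minus the fraction expected by chance
    given the degree sequence."""
    m2 = A.sum()                          # twice the number of edges
    k = A.sum(axis=1)
    same = labels[:, None] == labels[None, :]
    return float(((A - np.outer(k, k) / m2) * same).sum() / m2)

# Toy network: two 4-node cliques joined by a single bridge edge.
A = np.zeros((8, 8))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
Q = modularity(A, labels)                 # 11/26, about 0.423
```

A high Q for a partition of the region-level network is what the abstract means by "highly modular structure"; in practice the partition itself is found by maximizing Q over candidate module assignments.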
Modha D, Singh R. Network architecture of the long-distance pathways in the macaque brain. PNAS 2010, 107(30) 13485–13490
P232 Multimodal modeling of neural network activity: computing LFP, ECoG, EEG and MEG signals with LFPy 2.0
Espen Hagen1, Torbjørn V Ness2, Gaute Einevoll2, Solveig Næss3
1University of Oslo, Department of Physics, Oslo, Norway; 2Norwegian University of Life Sciences, Faculty of Science and Technology, Ås, Norway; 3University of Oslo, Department of Informatics, Oslo, Norway
Correspondence: Espen Hagen (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P233
Recordings of extracellular electrical, and later also magnetic, brain signals have been the dominant technique for measuring brain activity for decades. The interpretation of such signals is, however, nontrivial [1], as the measured signals result from both local and distant neuronal activity. In volume-conductor theory, extracellular potentials can be calculated from a distance-weighted sum of contributions from the transmembrane currents of neurons. Further, given the same transmembrane currents, the contributions to the magnetic field recorded both inside and outside the brain can also be computed [2]. This allows for the development of computational tools implementing forward models grounded in the biophysics underlying the different measurement modalities. LFPy ([3], LFPy.github.io) incorporated a now well-established scheme for predicting extracellular potentials of individual neurons with arbitrary levels of biological detail. It relies on NEURON ([4], neuron.yale.edu) to compute the transmembrane currents of multicompartment neurons, which are then used in conjunction with an electrostatic forward model [5]. We have now extended its functionality to populations and networks of multicompartment neurons with concurrent calculations of extracellular potentials and current dipole moments. The current dipole moments are used to compute non-invasive measures of neuronal activity, such as magnetoencephalographic (MEG) signals [2, 6] and, when combined with an appropriate head model, electroencephalogram (EEG) scalp potentials. One such built-in head model is the four-sphere model, which accounts for the different electric conductivities of brain, cerebrospinal fluid, skull and scalp [6, 7]. The version of LFPy presented here is thus a true multi-scale simulator, capable of simulating electric neuronal activity at the level of cell-membrane dynamics, individual synapses, neurons, networks, extracellular potentials within neuronal populations, and macroscopic EEG and MEG signals.
The present implementation is equally suitable for execution on laptops and in parallel on high-performance computing (HPC) facilities. The code is free, open source, and available from GitHub (https://github.com/LFPy/LFPy).
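The electrostatic forward model at the heart of such tools reduces, for a point current source in an infinite homogeneous volume conductor, to phi = I / (4 * pi * sigma * r). A minimal sketch (not LFPy's API), with an assumed tissue conductivity of 0.3 S/m:

```python
import numpy as np

def point_source_potential(I, src_pos, elec_pos, sigma=0.3):
    """Extracellular potential (V) of a point transmembrane current
    I (A) in an infinite homogeneous volume conductor of conductivity
    sigma (S/m): phi = I / (4 * pi * sigma * r)."""
    r = np.linalg.norm(np.asarray(elec_pos) - np.asarray(src_pos))
    return I / (4.0 * np.pi * sigma * r)

# A 1 nA current seen by an electrode 50 um away: a few microvolts.
phi = point_source_potential(1e-9, [0.0, 0.0, 0.0], [50e-6, 0.0, 0.0])
```

For a multicompartment neuron the potential at an electrode is the sum of such terms over all compartments; since the transmembrane currents sum to zero, distant electrodes see only the weaker dipolar remainder, which is why the current dipole moment suffices for EEG/MEG-scale predictions.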
Einevoll GT, Kayser, C, Logothetis NK, Panzeri S. Modelling and analysis of local field potentials for studying the function of cortical circuits. Nat Rev Neurosci 2013. 14:770–785. https://doi.org/10.1038/nrn3599
Hämäläinen M, Hari R, Ilmoniemi RJ, Knuutila J, Lounasmaa OV. Magnetoencephalography—theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev Mod Phys 1993. 65:413–487. https://doi.org/10.1103/revmodphys.65.413
Lindén H., Hagen E., Leski S., Norheim E., Pettersen K., Einevoll GT. LFPy: a tool for biophysical simulation of extracellular potentials generated by detailed model neurons. Front Neuroinform 2014, 7(41):1–15. https://doi.org/10.3389/fninf.2013.00041
Hines M, Davison A, Muller E. NEURON and Python. Front Neuroinform 2009. 3(1):1–12. https://doi.org/10.3389/neuro.11.001.2009
Holt G, Koch C. Electrical Interactions via the Extracellular Potential Near Cell Bodies. J Comp Neurosci 1999. 6:169–184. https://doi.org/10.1023/a:100883270
Nunez PL & Srinivasan R. Electric Fields of the Brain. Oxford University Press 2006. ISBN: 9780195050387
Næss S, Chintaluri C, Ness TV, Dale AM, Einevoll GT, Wójcik DK. Corrected Four-Sphere Head Model for EEG Signals. Front Hum Neurosci 2017. 11:490. https://doi.org/10.3389/fnhum.2017.00490
P233 Quantitative comparison of a mesocircuit model with motor cortical resting state activity in the macaque monkey
Michael von Papen1, Nicole Voges1, Paulina Dabrowska1, Johanna Senk1, Espen Hagen2, Markus Diesmann1, David Dahmen1, Lukas Deutz3, Moritz Helias1, Thomas Brochier3, Alexa Riehle3, Sonja Gruen1
1Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Juelich, Germany; 2University of Oslo, Department of Physics, Oslo, Norway; 3CNRS - Aix-Marseille Université, Institut de Neurosciences de la Timone (INT), Marseille, France
Correspondence: Michael von Papen (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P234
Modeling studies of cortical network dynamics frequently include realistic assumptions on structural and effective connectivity [4, 6] to achieve a qualitative reproduction of experimentally observed neuronal activity. Here, we develop a quantitative validation approach in which mean-field theory [2] guides the adaptation of a generic point-neuron network model to macaque motor cortex. We describe the characteristics of the experimental data extracted and used for comparison and present preliminary results for the generic network model. The underlying network model is an upscaled version of the Potjans & Diesmann [4] layered spiking network model, extended to a size of 4 × 4 mm² with a total of ~ 1.2 million leaky integrate-and-fire neurons [3]. In contrast to the original model, this mesocircuit model uses lateral distance-dependent connection probabilities derived from cortical neuroanatomical data. To compare the output with observations, we subsample single-unit activities from the corresponding layer of the simulated network with the same number of neurons and the same spatial arrangement of the recording array as in the experimental data. The model describes a system in a ground, idle or resting state with uncorrelated input. In order to perform a quantitative comparison with experimental data, we therefore conducted a resting-state experiment with macaque monkeys not given any specific task or stimulus. We recorded neuronal activity from premotor and motor cortex using a chronically implanted 4 × 4 mm² Utah array with 100 electrodes [1, 5]. A video of the monkey was used to differentiate between periods of rest and spontaneous movements.
The experimental single unit activities (~ 140 neurons) are subdivided into putative excitatory and inhibitory neurons based on their spike widths. We find that a) putative inhibitory and excitatory activity is in a balanced state, b) spike counts increase during movement, c) inhibitory units contribute more strongly to firing rate modulations than excitatory units, d) they also tend to be more strongly correlated among each other and e) the dimensionality of cortical activity is decreased during movement. Our results are to a large degree in accordance with mean-field theoretic predictions and may thus allow us to infer constraints on the parameter space of the mesocircuit model.
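The spike-width classification step can be sketched as follows; the trough-to-peak threshold of 0.4 ms below is an illustrative assumption, not the value used in the study:

```python
import numpy as np

def classify_units(spike_widths_ms, threshold_ms=0.4):
    """Split units into putative inhibitory (narrow-spiking) and excitatory
    (broad-spiking) classes by trough-to-peak waveform width.
    The threshold value is illustrative only."""
    widths = np.asarray(spike_widths_ms)
    return np.where(widths < threshold_ms, "inh", "exc")

# toy example: three narrow-spiking and two broad-spiking units
labels = classify_units([0.2, 0.3, 0.35, 0.6, 0.8])
```

In practice the threshold is usually chosen from the bimodality of the empirical width distribution rather than fixed a priori.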
Brochier T, et al. Massively parallel recordings in macaque motor cortex during an instructed delayed reach-to-grasp task (data publication). Scientific Data 2018 (accepted)
Dahmen D, et al. Two types of criticality in the brain. arXiv 2017:1711.10930 [cond-mat.dis-nn]
Hagen E, et al. Local field potentials in a 4 × 4 mm2 multi-layered network model. CNS-2016, BMC Neurosci. 2016, 17(Suppl 1)
Potjans TC, Diesmann M. The cell-type specific cortical microcircuit: Relating structure and activity in a full-scale spiking network model. Cereb. Cort. 2014, 24(3)
Riehle A, et al. Mapping the spatio-temporal structure of motor cortical LFP and spiking activities during reach-to-grasp movements. Front. Neural Circuit 2013, 7(48)
Voges N, Perrinet L. Complex dynamics in recurrent cortical networks based on spatially realistic connectivities. Front. Comput. Neurosci. 2012, 6.
P234 Generalized phase resetting and phase-locked mode prediction in biologically-relevant neural networks
Dave Austin, Sorinel Oprisan
College of Charleston, Department of Physics and Astronomy, Charleston, SC, United States
Correspondence: Dave Austin (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P235
Environmental stimuli are continuously processed by the central nervous system (CNS) to better adjust, adapt, and learn new responses that optimize our benefits. At the neural level, external stimuli are coded as spikes of electrical activity, called action potentials (APs). Neurons respond to changes in the environment by altering their firing speed, or phase, which means that instead of firing at a regular pace they fire faster or slower. The amount of change, or resetting, in their firing period is determined by the timing, duration, and strength of the external stimulus. However, neurons connect with each other and create large networks capable of elaborate firing patterns that drive the response of the organism. We modeled the neural network as hierarchically organized layers of neurons, where each neuron's response is dictated by its own phase-resetting behavior. We generalized this mathematically and then verified numerically that knowledge of how one isolated neuron responds to a stimulus can help predict the response of a larger network to complex stimuli.
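The core idea, predicting phase-locked modes from a single neuron's phase-resetting curve (PRC), can be illustrated with a minimal iterated phase map; the sinusoidal PRC and parameter values below are hypothetical, not taken from this work:

```python
import numpy as np

def prc(phi):
    # illustrative phase-resetting curve of a single neuron (assumption)
    return 0.2 * np.sin(2.0 * np.pi * phi)

def iterate_phase(phi0, period_ratio, n_steps=200):
    """Iterate the firing-phase map phi -> (phi + ratio - PRC(phi)) mod 1.
    Convergence to a fixed point predicts a stable phase-locked mode of the
    periodically driven neuron."""
    phi = phi0
    for _ in range(n_steps):
        phi = (phi + period_ratio - prc(phi)) % 1.0
    return phi

# drive period equal to the intrinsic period: the map converges to phi = 0
phi_star = iterate_phase(0.1, 1.0)
```

The fixed point is stable because the map's slope there has magnitude below one; the same construction extends to coupled neurons by composing each layer's phase map.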
Peng Gao1, Joe Graham2, Sergio Angulo2, Salvador Dura-Bernal2, Michael Hines3, William W Lytton2, Srdjan Antic4
1UCONN Health, Department of Neuroscience, Farmington, CT, United States; 2SUNY Downstate Medical Center, Department of Physiology and Pharmacology, Brooklyn, NY, United States; 3Yale University, Department of Neuroscience, CT, United States; 4University of Connecticut Health Center, Department of Neuroscience, Farmington, CT, United States
Correspondence: Peng Gao (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P236
Adam Li1, Marmaduke Woodman2, Viktor Jirsa2, Sridevi Sarma1
1Johns Hopkins University, Biomedical Engineering, Baltimore, MD, United States; 2Aix-Marseille Universite, Institut de Neurosciences, Marseille, France
Correspondence: Adam Li (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P237
Over 20 million people in the world suffer from medically refractory epilepsy (MRE). Approximately 50% of MRE patients have focal MRE, meaning that a small focal region in the brain, the epileptogenic zone (EZ), is the source of the seizures. For patients with focal MRE, treatment by surgical resection of the EZ can be effective, provided the EZ is reliably identified and entirely removed. Identification of the EZ often requires surgical implantation of subdural grid or stereotactic depth (SEEG) electrodes, followed by visual inspection of hundreds of EEG signals during seizure events that occur over several days to weeks. Clearly, surgical outcome relies heavily on precise localization of the EZ. We aim to integrate efforts from computational modeling and data analysis of SEEG recordings to better localize the EZ in an epilepsy patient. The Virtual Brain (TVB) is a computational platform that can integrate patient-specific information, such as brain connectivity derived from MRI and clinicians' EZ hypotheses, to form personalized brain models capable of simulating realistic functional signals (i.e. SEEG). From a data analysis perspective, we used a novel network-based algorithm, coined the "fragility algorithm", that has demonstrated capabilities of localizing the EZ by analyzing network stability with respect to nodal perturbations. The fragility algorithm, unlike single-channel frequency analysis, treats the SEEG as a network and can efficiently analyze it to create a heatmap of predictions of the EZ. The fragility algorithm determines which nodes within the epileptic network (i.e. SEEG channels) are the most fragile, i.e., nodes whose connections, if perturbed slightly, will destabilize the network. Fragility weights for each node are then used to predict the EZ. We built personalized brain models for two temporal focal MRE patients to determine the algorithm's predictions in two different situations.
We simulated for each patient in silico: (1) inside: placement of the EZ at the clinical hypothesis and (2) outside: placement of the EZ outside the resection region. We then applied the fragility algorithm to the simulated and actual SEEG data from each patient to see if the fragility maps of the simulated EZ scenarios resemble the fragility maps derived from actual recordings. With TVB integrated with fragility analysis, we can hypothesize where the true EZ might be for a given MRE patient and whether or not it was correctly identified by clinicians using standard visualization methods. In one patient who had a successful surgery, we assumed the EZ lay within the resected region and found that the predicted EZ in the real SEEG and simulated data, as identified by the algorithm, matched the clinically annotated EZ. In contrast, for the patient with a failed surgery, we assumed the EZ lay outside the resected region and found that the predicted EZ in the real SEEG data and the simulated data did not match the clinical EZ. These results suggest that the failed epilepsy surgery occurred because the EZ was not within the resected region, whereas in the successful case it was. These results outline how personalized brain models can help determine the sensitivity of EZ localization algorithms to the location of the EZ, and how such models can be integrated with data analysis to validate whether the EZ is properly localized before surgical resection.
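As a rough illustration of the fragility idea (our simplified one-parameter stand-in, not the authors' algorithm), one can score each node of a linear network model by the smallest perturbation that destabilizes the dynamics:

```python
import numpy as np

def node_fragility(A, gammas=np.linspace(0.0, 2.0, 201)):
    """Score each node of a linear network x_{t+1} = A x_t by the smallest
    self-loop perturbation gamma that destabilizes the dynamics (spectral
    radius >= 1). Fragile nodes need only a small perturbation. Simplified
    illustration of the concept, not the published fragility algorithm."""
    n = A.shape[0]
    frag = np.full(n, np.inf)
    for j in range(n):
        for g in gammas:
            B = A.copy()
            B[j, j] += g                      # perturb node j's self-connection
            if np.max(np.abs(np.linalg.eigvals(B))) >= 1.0:
                frag[j] = g
                break
    return frag

# toy stable network: node 0 sits closer to instability, so it is more fragile
A = np.array([[0.9, 0.0], [0.0, 0.5]])
frag = node_fragility(A)
```

Low scores flag fragile channels; in the actual algorithm the perturbation is over a node's network connections rather than a single self-loop.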
Joseph Knox1, Kameron Decker Harris2, Nile Graddis1, Jennifer Whitesell1, Julie Harris1, Hongkui Zeng1, Eric Shea-Brown3, Stefan Mihalas1
1Allen Institute for Brain Science, Modelling, Analysis and Theory, Seattle, WA, United States; 2University of Washington, Department of Computer Science, Seattle, WA, United States; 3University of Washington, Department of Applied Mathematics, Seattle, WA, United States
Correspondence: Joseph Knox (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P238
Brian Hu, Stefan Mihalas
Allen Institute for Brain Science, Modelling, Analysis and Theory, Seattle, WA, United States
Correspondence: Brian Hu (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P238
Ramakrishnan Iyer, Stefan Mihalas
Allen Institute for Brain Science, Modelling, Analysis and Theory, Seattle, WA, United States
Correspondence: Ramakrishnan Iyer (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P239
Neuronal responses in early visual cortex are primarily driven by inputs to the classical receptive field and are influenced by stimuli in the receptive field surround. This type of spatial contextual effect is thought to arise from the statistical structure present in natural scenes, with the surround providing context for the information in the classical receptive field. Lateral connections between neurons in the same cortical area are generally thought to be responsible for transmitting information in the near surround. In a number of recent experimental studies, excitatory neurons have been demonstrated to have like-to-like connectivity, with neurons coding for the same feature (e.g. orientation) preferentially connecting to each other with higher probability and/or strength, and specific rules for the connectivity of inhibitory neuron types have been described. On the other hand, normative models of lateral interactions, relying on sparsity and saliency in the optimal representation of natural images, predict functional inhibition between excitatory neurons. Starting from the assumption that each excitatory neuron represents the probability of a feature being present in the sensory stimulus, we hypothesize that lateral connections serve to optimally (in a Bayesian sense) integrate evidence from the surround. We show that such optimal integration of contextual information can be implemented by a neuronal network. Using natural scene statistics obtained from the Berkeley Segmentation Dataset and in vivo electrophysiological data from awake mouse V1 neurons, we compute the synaptic weights between neurons in our network resulting from the optimal integration of contextual information. We show that this network has like-to-like connectivity between excitatory neurons, in agreement with experimental observations.
However, in our model, even neurons with non-overlapping classical receptive fields can have strong connections if they code for features which often co-occur in natural scenes. The distance dependence of connections is similar to that observed experimentally and is found to be heavier-tailed than an exponential decay. These results generalize to other classes of receptive fields, including Gabors and sharp on–off band-like receptive fields. The network also needs multiple types of inhibition - local normalization, surround inhibition, and gating of inhibition from the surround - which we map to the parvalbumin (PV), somatostatin (SST), and vasoactive intestinal peptide (VIP) expressing interneuron cell classes, respectively. We compared our local circuit model with a correlation-based model, in which the lateral connectivity between cortical neurons is determined by the correlation between their classical receptive fields. We find that the correlation-based model results in a much steeper decay of both like-to-like connectivity and distance dependence, compared to our model. We also show that, compared to a feedforward network, the presence of this local network structure increases the capacity to reconstruct a natural scene from the activities of neurons in the network under noisy conditions. We hypothesize that optimal integration of context is a general computation of cortical circuits, and that the local network rules constructed for mouse V1 generalize to other areas and species.
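A minimal sketch of how co-occurrence statistics could translate into like-to-like lateral weights, here via pointwise mutual information (our illustrative choice for conveying the idea, not necessarily the paper's exact Bayesian update):

```python
import numpy as np

def lateral_weights(feature_counts, joint_counts, n_images):
    """Illustrative surround-integration weights from natural scene
    statistics: pointwise mutual information between feature occurrences.
    Features that co-occur more often than chance ('like-to-like') receive
    positive lateral weights; features that avoid each other receive
    negative (functionally inhibitory) weights."""
    p = feature_counts / n_images          # marginal occurrence probabilities
    pj = joint_counts / n_images           # joint occurrence probabilities
    return np.log(pj / np.outer(p, p))

# two features each present in 500 of 1000 images, co-occurring in 400
counts = np.array([500.0, 500.0])
joint = np.array([[500.0, 400.0], [400.0, 500.0]])
W = lateral_weights(counts, joint, 1000.0)
```

Because co-occurrence in natural scenes falls off slowly with distance, weights built this way inherit the heavy-tailed distance dependence described above.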
P240 Identifying the constraints and redundancies shaping the retinal code with a deep network simulation
Jack Lindsey, Surya Ganguli, Stephane Deny
Stanford University, Department of Applied Physics, Stanford, CA, United States
Correspondence: Jack Lindsey (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P240
Retinal ganglion cells, which transmit the output of the retina to the brain, typically have a concentric center-surround receptive field and consist of ON and OFF types. Although the retinal responses to natural and artificial scenes have been well characterized, it remains unclear how efficient the retinal code is at transmitting visual information. Here we seek to identify the biological constraints of the visual system and the redundancies in natural scene statistics that have shaped the retinal code, by varying, respectively, the architectural constraints of, and the statistics of inputs to, artificial neural networks trained to classify objects in natural images. We find that when we allow an overcomplete representation in the early layers (i.e. many more neurons in each layer than pixels in the image), the trained network exhibits oriented receptive fields in these layers. However, when we severely restrict the number of neurons in these early layers (guided by the intuition that the optic nerve can only contain a limited number of fibers), concentric center-surround receptive fields emerge. We also find that the response patterns of these neurons naturally cluster into two functional groups analogous to ON and OFF cells in the retina. Moreover, the receptive fields of the first overcomplete layers downstream of our artificial retina are oriented, like receptive fields in V1. Examining the connection weights between the two layers, we find that these oriented receptive fields are generated by drawing from a few center-surround neurons along an axis, mirroring Hubel and Wiesel's hypothesis about simple cells in visual cortex. These results suggest that the retinal code, with its two main cell populations and its concentric receptive fields, could be optimized to transmit relevant visual information to the brain with a limited number of neurons.
This interpretation of the constraints underlying the retinal code is an alternative to that of Karklin and Simoncelli, which suggests that the retinal code is optimized for metabolic and noise constraints. Finally, we provide evidence that the utility of center-surround encoding for compression arises from covariances among orientation-selective neuron activations in response to natural scenes.
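A loosely related classical intuition for why compression of correlated natural inputs favors center-surround profiles can be shown in a few lines; this is our illustrative construction (a whitening filter for 1/f-like image statistics), not the authors' analysis:

```python
import numpy as np

# Illustrative (not from the paper): natural images have ~1/f amplitude
# spectra, so a channel-limited code benefits from a filter with gain ~ |f|
# that removes these correlations. In space, that filter is center-surround.
n = 64
freqs = np.fft.fftfreq(n) * n          # integer frequency indices
whitening = np.abs(freqs)              # gain ~ |f| undoes 1/f correlations
kernel = np.real(np.fft.ifft(whitening))
# kernel[0] is an excitatory center; the adjacent taps are inhibitory
```

In the deep-network setting of the abstract, the analogous decorrelation emerges from training under the bottleneck constraint rather than being imposed analytically.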
Baden T, Berens P, Franke K, et al. The functional diversity of retinal ganglion cells in the mouse. Nature 2016, 529(7586):345.
Karklin Y, Simoncelli EP. Efficient coding of natural images with a population of noisy linear-nonlinear neurons. In Advances in neural information processing systems 2011, 999–1007.
McIntosh L, Maheswaranathan N, Nayebi A, et al. Deep learning models of the retinal response to natural scenes. In Advances in neural information processing systems 2016, 1369–1377.
P241 Biophysical modeling of human MEG reveals two mechanisms effected by bandlimited transients in perceiving weak stimuli
Robert Law, Hyeyoung Shin, Shane Lee, Christopher Moore, Stephanie Jones
Brown University, Department of Neuroscience, Providence, RI, United States
Correspondence: Robert Law (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P241
Bandlimited power in cerebral cortex occurs in the form of transient events, a fact often obscured by spectral averaging in classical empirical studies of brain rhythms. Events in the beta band (15–29 Hz) may be locally rhythmic but are brief and sparse, while occurring in scalp-level and mesoscale electrophysiology across diverse neocortical regions in humans, nonhuman primates and rodents. Beta is implicated in nearly every aspect of cortical function, varying during attention, perception and decision-making as well as motor control. However, we presently lack a mechanistic model that might afford beta events a functional role susceptible to pharmacological, electromagnetic or genetic controls. We are motivated by studies in primary somatosensory cortex (SI), where beta is consistently reported to have suppressive effects on the perception of subsequent tactile stimulation—even a full second after an event occurs. To investigate the underlying mechanisms, we use a geometrically reduced biophysical circuit model consisting of motifs universal to neocortex, which fits both perceptual and mean magnetoencephalographic (MEG) tactile evoked responses (ERs) after event occurrence in SI, while offering unprecedented predictivity of ER features. Previous modeling indicates that beta events are generated by calcium-mediated bursts incident on L1 with one primary source in nonlemniscal thalamus. Recent studies show that such inputs act simultaneously on superficial pyramidal and interneuron subnetworks. In our model, incoming bursts prime pyramidal dendrites while recruiting neurogliaform (NGF) cells either directly or through electrical coupling after interneuron synchronization. Modeling shows how both mechanisms act in sequence, with NGF suppression occurring more often due to the slow timescale of GABAB inhibition (> 250 ms), explaining the net behavioral effect.
Before NGF is recruited, however, incoming bursts amplify stimuli through subthreshold facilitation - similar to previous oscillation models but with significant improvements in precision. Our model predicts the correct beta phase at stimulus onset in “hit” trials, and also makes two precise predictions of poststimulus bandlimited phase coherences verified at 160–200 Hz at 40 ms (from model L2/3) and 90–110 Hz at 60 ms (from model L5). After this priming phase, L2/3 NGF cells act through GABAB channels on pyramidal somata (in L2/3) and on L5 middle apical dendrites. This mutes L5 pyramidal bursts and clamps L2/3 pyramidal somatic voltages to the potassium reversal. The latter effect is essential for a conspicuous ER feature generated by a completely novel mechanism, where high-voltage downward-propagating dendritic spikes collide with a soma fixed at its minimum voltage. This causes a biophysically maximal local current ideal for driving signaling cations into the soma, raising the possibility that GABAB suppression gates learning after early-prediction error.
Our modeling circumscribes nearly the entire known phenomenology on beta, generating predictions verified at the scalp level with invasively testable analogues. In the process, we unite previously unlinked findings among theoretical, electrophysiological, and anatomical domains. Our results shed particular light on neurogliaform GABAB action in neural computation, with concrete manipulations accessible through a suggestive pharmacopeia including alcohol, opiates, serotonin and neuropeptide Y.
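The premise that beta power comes in transient events rather than a sustained rhythm can be made concrete with a simple envelope-threshold event detector; the filter order, band edges, and threshold below are illustrative choices, not those of the study:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_beta_events(x, fs=1000.0, band=(15.0, 29.0), rel_thresh=0.5):
    """Flag transient beta-band events as excursions of the band-limited
    amplitude envelope above a fraction of its maximum. Thresholding the
    envelope (rather than averaging spectra across trials) preserves the
    brief, sparse character of beta events. Parameters are illustrative."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    env = np.abs(hilbert(filtfilt(b, a, x)))
    return env > rel_thresh * env.max()

# toy trace: silence except for one 150 ms burst of 21 Hz activity
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.where((t >= 0.4) & (t < 0.55), np.sin(2 * np.pi * 21 * t), 0.0)
mask = detect_beta_events(x, fs)
```

Event timing extracted this way is what allows relating individual beta events, rather than average power, to subsequent perceptual outcomes.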
Dhekra Al-Basha1, Milad Lankarany1, Stephanie Ratté1, Steve Prescott2
1The Hospital for Sick Children, Neurosciences and Mental Health, Toronto, Canada; 2University of Toronto & The Hospital for Sick Children, Neurosciences and Mental Health & Dept. Physiology, Institute of Biomaterials and Biomedical Eng, Toronto, Canada
Correspondence: Dhekra Al-Basha (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P242
P243 The degenerate basis for excitability: Interpreting the pairwise correlation of parameter values in randomly generated model neurons with equivalent excitability
Arjun Balachandar1, Steve Prescott2
1University of Toronto, Faculty of Medicine, Toronto, Canada; 2University of Toronto & The Hospital for Sick Children, Neurosciences and Mental Health & Dept. Physiology, Institute of Biomaterials and Biomedical Eng, Toronto, Canada
Correspondence: Arjun Balachandar (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P243
Neurons use action potentials, or spikes, to encode information. Proper neural coding thus relies on the proper control of spike initiation. The spike initiation process reflects the highly nonlinear interaction between different ion channels, which means that subtle variations in ion channel expression or function can dramatically impact excitability. Yet excitability is normally very stable, which raises the question of how excitability is regulated so robustly. Emerging data argue that the biophysical basis for excitability is highly degenerate, meaning that many different combinations of ion channel conductances can yield equivalent excitability. This degeneracy is thought to facilitate the robust regulation of excitability by allowing changes in any one ion channel to be compensated for by changes in many other ion channels. We hypothesized that parameters that are able to compensate for one another will be correlated. We further hypothesized that the strength of pairwise correlations will be weakened as the degree of degeneracy (i.e. the number of parameters that can compensate for one another) increases. To test these hypotheses, large sets of conductance-based Morris-Lecar models were generated with randomly chosen parameter values describing different conductance densities or activation properties. A different number of parameters was allowed to vary for each set of models; all other parameters were held constant at their baseline values. From these sets, we identified model neurons with comparable excitability and, using only those models, determined the pairwise correlations between the randomly varied parameters. Correlations were observed between some parameters and, as further predicted, the strength of correlation decreased as the number of randomly varied parameters increased. Based on these results, we expect that highly degenerate systems will exhibit only weak pairwise correlations in their parameter values.
Conversely, strong correlations may suggest that a system has only modest degeneracy and that it may, therefore, be less able to compensate in the face of strong perturbations.
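The selection-induced correlations and their weakening with higher degeneracy can be reproduced in a toy model; here "equivalent excitability" is replaced by the stand-in constraint that the parameter sum lands near a target, which is our simplifying assumption rather than the Morris-Lecar criterion used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def corr_after_selection(n_params, n_samples=200_000, tol=0.05):
    """Sample random 'conductance' parameters, keep only models whose summed
    parameters land near a target (a stand-in for equivalent excitability),
    and return the correlation between the first two parameters. With more
    compensating parameters, the constraint is shared more widely and each
    pairwise correlation weakens (roughly -1/(n_params - 1))."""
    g = rng.uniform(0.0, 1.0, size=(n_samples, n_params))
    keep = np.abs(g.sum(axis=1) - n_params / 2.0) < tol
    sel = g[keep]
    return np.corrcoef(sel[:, 0], sel[:, 1])[0, 1]

r2 = corr_after_selection(2)   # near-perfect pairwise trade-off
r5 = corr_after_selection(5)   # much weaker pairwise correlation
```

With only two free parameters the constraint forces an almost perfect anti-correlation; with five, the same constraint is diluted across many pairs, matching the trend reported in the abstract.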
Milad Lankarany1, Steve Prescott2
1University of Toronto & The Hospital for Sick Children, Neurosciences & Mental Health, Toronto, Canada; 2University of Toronto & The Hospital for Sick Children, Neurosciences and Mental Health & Dept. Physiology, Institute of Biomaterials and Biomedical Eng, Toronto, Canada
Correspondence: Milad Lankarany (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P244
Multiplexing refers to the simultaneous transmission of multiple signals through a single communication channel. In engineered systems, multiplexing is often implemented by partitioning different signals to different frequency bands (frequency-division multiplexing) or to different temporal epochs (time-division multiplexing). Mounting evidence suggests that the brain also multiplexes but it remains unclear how this might occur. We hypothesized that the brain can form multiplexed representations of first- and second-order stimulus features (i.e. stimulus intensity and abrupt variations therein, such as occur at edges) using spikes that are differentially synchronized across a set of neurons receiving common input. To test our hypothesis, we built a feed-forward neural network comprising Morris-Lecar (ML) model neurons. All neurons received a common mixed input constructed from two distinct signals, slow and fast, plus uncorrelated fast noise. The slow and fast signals represent input from upstream sensory neurons tuned to first- or second-order stimulus features based on their low- or high-pass filter properties, respectively. The two sensory streams converge on the ML model neurons. According to our hypothesis, slow and fast signals are independently encoded by different types of spikes. Specifically, the rate of asynchronous (Async) spikes encodes the slow signal whereas the timing of synchronous (Sync) spikes encodes the fast signal. To assess the feasibility of the multiplexed coding scheme, we fit linear-nonlinear (LN) rate models to PSTHs from our conductance-based spiking models. In a conventional LN model, input passes through a linear filter and then through a static nonlinearity whose output is firing rate. We constructed a multiplexing LN model with two parallel streams; the same mixed signal is presented to both filters but the output of each filter passes through a different nonlinearity.
Unlike the two input streams, which represent input from two differently specialized sets of sensory neurons, the two streams within the LN model represent two operating modes—integration (low-pass filtering) and coincidence detection (high-pass filtering)—used by a set of neurons operating in a hybrid mode. The two-stream LN model more accurately predicted true firing rate (using the PSTH of spiking models as reference) than the one-stream LN model, especially for synchronous spikes. Our results demonstrate that a set of cortical pyramidal cells can implement multiplexing by simultaneously encoding slow and fast features of a mixed signal through a multi-modal filter. These results are further validated experimentally, as presented in our companion poster Multiplexed coding using differentially synchronized spikes: Part 2, experiments.
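The two-stream architecture can be sketched in a few lines; the time constants, weights, and rectifying nonlinearities below are illustrative placeholders, not the fitted LN parameters:

```python
import numpy as np

def ln_two_stream(signal, dt=0.001, tau_slow=0.05, tau_fast=0.005, w_fast=5.0):
    """Two-stream LN sketch: a low-pass 'integration' stream tracks the slow
    signal while a high-pass 'coincidence' stream extracts fast transients;
    each stream passes through its own rectifying nonlinearity before the
    rates are summed. All constants are illustrative."""
    slow = np.zeros_like(signal)
    lp = np.zeros_like(signal)
    for t in range(1, len(signal)):
        slow[t] = slow[t-1] + dt / tau_slow * (signal[t] - slow[t-1])
        lp[t] = lp[t-1] + dt / tau_fast * (signal[t] - lp[t-1])
    fast = signal - lp                       # high-pass residual (edges)
    return np.maximum(slow, 0.0) + w_fast * np.maximum(fast, 0.0)

# an abrupt 'edge' drives a large transient through the fast stream,
# while the slow stream settles to the sustained intensity
step = np.zeros(1000)
step[500:] = 1.0
rate = ln_two_stream(step)
```

The transient at the step mimics the synchronous-spike contribution, and the sustained plateau mimics the asynchronous rate code for intensity.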
Yadeesha Deerasooriya1, Géza Berecki2, David Kaplan2, Saman Halgamuge3, Steven Petrou2
1The University of Melbourne, Mechanical Engineering, Melbourne, Australia; 2The University of Melbourne, The Florey Institute of Neuroscience and Mental Health, Melbourne, Australia; 3The Australian National University, College of Engineering & Computer Science, Canberra, Australia
Correspondence: Yadeesha Deerasooriya (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P245
Mathematical modelling significantly contributes to our understanding of the mechanisms underlying neuron and network behaviour. Hodgkin-Huxley (HH) equations are frequently used to model neuron conductances. The majority of existing HH conductance modelling workflows are based on data acquired under current clamp or voltage clamp (VC) recording conditions. While current clamp provides phenotypically rich information on a neuron’s firing properties, it is often difficult to disentangle the influence of the multiple conductances at play in a real neuron. Isolation of a single conductance by pharmacological means or heterologous expression simplifies analysis; however, HH parameterisation then requires exhaustive exploration of the kinetic and voltage properties using VC, which takes weeks of recording time. A more time-efficient experimental approach would not only benefit current efforts to develop specific HH models for the entire set of voltage-gated channels expressed in the brain, but also free resources to explore these ion channels under pathological or pharmacological conditions. We present an improved HH conductance modelling workflow that uses data derived from dynamic action potential clamp (DAPC) recordings to extract conductance model parameters. A major difference of DAPC over VC recording is that DAPC is an action potential (AP) weighted systems-identification approach and is more aligned to the common final use of HH models, which is the building of single-neuron and network AP firing models. In this study, first, using fully simulated conditions, we show that with as little as one second of DAPC recording time, we can produce parameters with an average error of less than 4%. When deployed into simple neuron models, these parameters produced firing rates that approached 100% accuracy in fully simulated experiments.
Second, we undertake a real-world test using NaV1.2 channels and show that training our model with five or fewer APs could produce an HH conductance model that predicted subsequent AP firing with 97% firing-rate accuracy. Further, the AP traces overlapped with 94% accuracy. We conclude that DAPC-based workflows can be as accurate as, or even more accurate than, VC-based workflows for extracting HH conductance model parameters. Importantly, this accuracy can be obtained with considerably less recording time and effort, providing additional opportunity for exploration of HH conductance equations in different experimental conditions and positioning this approach to exploit current advances in ion channel genetics and precision drug discovery.
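A heavily simplified sketch of the kind of estimation involved: if the gating variables are simulated from a recorded AP voltage trajectory, a maximal conductance can be recovered by linear least squares. This toy reduction is ours; the authors' workflow estimates the full kinetic parameter set, not just gmax:

```python
import numpy as np

def fit_gmax(V, I_rec, m, h, E_rev):
    """Recover a maximal conductance from an AP-clamp-style recording by
    linear least squares on I = gmax * m^3 * h * (V - E_rev), with gating
    variables m, h assumed simulated from the voltage trajectory. A toy
    stand-in for the full DAPC parameter-estimation workflow."""
    x = (m ** 3 * h * (V - E_rev)).reshape(-1, 1)
    coef, *_ = np.linalg.lstsq(x, I_rec, rcond=None)
    return float(coef[0])

# synthetic check: a known gmax should be recovered exactly (no noise)
V = np.linspace(-80.0, 40.0, 200)      # voltage trajectory (mV)
m = np.linspace(0.05, 0.9, 200)        # assumed activation gating
h = np.linspace(0.9, 0.1, 200)         # assumed inactivation gating
I = 120.0 * m ** 3 * h * (V - 50.0)    # 'recorded' current, gmax = 120
g_hat = fit_gmax(V, I, m, h, E_rev=50.0)
```

The appeal of the AP-weighted approach is visible even here: the fit is constrained by exactly the voltage trajectory the model must ultimately reproduce.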
Scott Purdy, Subutai Ahmad
Numenta, Redwood City, CA, United States
Correspondence: Subutai Ahmad (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P246
Our model is compared with two other models: (1) a bag-of-features model that only compares landmarks, without locations, and (2) an ideal model that exhaustively examines all environments to find the best match. Using the relative positions of landmarks, our model achieves perfect accuracy when there is little noise and lags the ideal model only slightly for very noisy test cases. The bag-of-features model is no better than chance when a small pool of five landmarks is used. Further research will explore the generalization ability of the model and the addition of an unsupervised temporal clustering layer that can reinstate learned relative-location representations in order to predict input sensations that have not recently been seen.
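Why relative landmark positions disambiguate environments where a bag of features cannot is easy to show with a toy comparison; the environment encoding below is our illustration, not the model's actual representation:

```python
def match_environment(observed, environments, use_locations=True):
    """Match a set of sensed (landmark, location) pairs against stored
    environments; the bag-of-features variant discards locations. Toy
    illustration of why relative positions disambiguate environments."""
    def signature(env):
        return sorted((lm, loc) if use_locations else lm for lm, loc in env)
    target = signature(observed)
    return [i for i, env in enumerate(environments) if signature(env) == target]

# two environments with identical landmark bags but different geometry
envs = [
    [("door", (0, 0)), ("lamp", (1, 0))],
    [("door", (0, 0)), ("lamp", (0, 1))],
]
with_loc = match_environment(envs[1], envs, use_locations=True)
bag_only = match_environment(envs[1], envs, use_locations=False)
```

With a small landmark pool, many environments share the same bag, so the location-free match degrades toward chance, as reported above.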
P247 Development of direction selectivity via a synergistic interaction between short-term and long-term synaptic plasticity
Nareg Berberian, Matt Ross, Jean-Philippe Thivierge, Sylvain Chartier
University of Ottawa, Department of Psychology, Ottawa, Canada
Correspondence: Nareg Berberian (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P247
Roberto Legaspi1,2, Taro Toyoizumi1,2
1Laboratory for Neural Computation and Adaption, RIKEN Center for Brain Science, Saitama, Japan; 2RIKEN CBS-OMRON Collaboration Center
Correspondence: Roberto Legaspi (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P248
Sense of agency, i.e., the feeling that oneself caused something to happen, is fundamental to the experience of volition, self-consciousness and responsibility for one’s own actions, and the degradation of this experience characterizes certain psychiatric disorders. Despite its irrefutable significance, the literature still lacks a mathematical exposition of the computational principles that underlie it. We theorize sense of agency as the confidence in one’s perception of action-outcome effect to be consistent with the hypothesis that the self is the common source behind this effect. We adapted the Bayesian inference model of Sato, Toyoizumi and Aihara that was originally used to explain the ventriloquism effect as a Bayesian estimate of a common source behind the audio-visual stimuli. Formalizing sense of agency by this Bayesian principle distinguishes our theory from existing works. Intentional binding, i.e., the perceived shortening of the time interval between a voluntary action and its outcome, has been reported as an implicit measure of sense of agency. Yet, the exact nature of this link is far from understood. Our Bayesian model gives a simple coherent account of this link: a shorter perceived interval between the action-outcome timings is more consistent with the causal role of one’s action in producing the immediate outcome, and thus increases the confidence of the Bayesian estimate, modeled as sense of agency. We compared the predictions of our model to the results of two pertinent intentional binding studies. The first follows the seminal experiment reported by Haggard, Clark & Kalogeras that showed voluntary actions produced intentional binding effects but involuntary actions produced the opposite, a prolonged perception of the action-outcome interval. The second case follows the study of Wolpe, Haggard, Siebner & Rowe that investigated the contribution of sensory uncertainty to intentional binding by manipulating the intensity of outcome tones.
They showed that when the outcome reliability was reduced, action binding was diminished and tone binding was increased. Our Bayesian psychophysics model reproduces these empirical results based on a computational principle.
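The causal-inference step described above can be illustrated with a minimal Gaussian sketch (all parameter values and variable names here are illustrative assumptions, not the authors’ implementation): the observed action-outcome interval is combined with a short-interval prior under the common-source hypothesis, which pulls the perceived action later and the perceived tone earlier (binding), while the posterior probability of the common-source hypothesis plays the role of sense of agency.

```python
import math

def binding_estimates(t_action, t_tone, sigma_a, sigma_o,
                      mu_tau=100.0, sigma_tau=100.0,
                      mu_ind=400.0, sigma_ind=400.0, p_common=0.5):
    """Gaussian common-source sketch of intentional binding (times in ms).

    t_action, t_tone: sensed action/outcome times; sigma_a, sigma_o: sensory noise.
    mu_tau, sigma_tau: prior on the action-outcome delay under a common source.
    mu_ind, sigma_ind: broad interval prior under independent sources.
    """
    delta = t_tone - t_action               # observed interval
    var_d = sigma_a**2 + sigma_o**2         # noise on the interval estimate

    # Posterior interval under the common-source hypothesis (precision weighting)
    tau_hat = (delta / var_d + mu_tau / sigma_tau**2) / (1 / var_d + 1 / sigma_tau**2)

    # Distribute the interval compression over the two events by their reliabilities
    shift = delta - tau_hat
    t_action_hat = t_action + shift * sigma_a**2 / var_d   # action pulled later
    t_tone_hat = t_tone - shift * sigma_o**2 / var_d       # tone pulled earlier

    # "Sense of agency": posterior probability of the common-source hypothesis
    def gauss(x, mu, var):
        return math.exp(-(x - mu)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    like_c = gauss(delta, mu_tau, var_d + sigma_tau**2)
    like_i = gauss(delta, mu_ind, var_d + sigma_ind**2)
    agency = p_common * like_c / (p_common * like_c + (1 - p_common) * like_i)
    return t_action_hat, t_tone_hat, agency
```

With an observed 250 ms interval this sketch reproduces the qualitative pattern: action binding toward the tone, tone binding toward the action, and, when the tone is made less reliable (larger sigma_o), weaker action binding together with stronger tone binding.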
Sato Y, Toyoizumi T, Aihara K. Bayesian inference explains perception of unity and ventriloquism after effect: identification of common sources of audiovisual stimuli. Neural Computation 2007, 19, 3335–3355
Moore JW, Obhi SS. Intentional binding and the sense of agency: A review. Consciousness & Cognition 2012, 21(1), 546–561.
Haggard P, Clark S, Kalogeras J. Voluntary action and conscious awareness. Nature Neuroscience 2002, 5(4), 383–385
Wolpe N, Haggard P, Siebner HR, Rowe JB. Cue integration and the perception of action in intentional binding. Experimental Brain Research 2013, 229, 467–474.
P249 On and off responses in auditory cortex may arise from a two-layer network with variable excitatory and inhibitory connections
Shih-Cheng Chien1, Burkhard Maess1, Thomas Knoesche2
1Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; 2MPI for Human Cognitive and Brain Sciences, Department of Neurophysics, Leipzig, Germany
Correspondence: Shih-Cheng Chien (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P249
This study investigates dynamical network models designed to reproduce the electrophysiological responses of the human auditory cortex to mismatch-negativity-eliciting stimulation. To this end, we first focused on modeling On and Off responses to tonal stimuli via recurrent circuits as they exist in the auditory cortex [1–3]. In the simulation, the recurrent circuits are represented by a two-layer network, where the input stimuli from the thalamus reach the 1st layer and indirectly affect the 2nd-layer activities through recurrent inter-layer connections. With a stream of stimuli fed to the 1st layer (input), various types of On/Off responses can be reproduced in the 2nd layer (observation) given proper inter-layer connections. The simulation results account for relevant properties of cortical On/Off responses and thereby provide clues about the underlying physiological mechanisms. (1) A subtle change in inter-layer connections switches the response type between On, On and Off, and Off. Furthermore, it can also switch between ‘sustained’ and ‘suppressed’ activity during the stimulus presentation. Hence, the diverse On/Off responses observed at different locations in auditory cortex may reflect diverse inter-layer connections between the input and the observation layer. Interestingly, symmetric inter-layer connections do not give rise to On/Off responses, underlining the importance of asymmetric forward-backward interactions for the change detection function at the cortical level. (2) The distinct onset and offset frequency receptive fields (FRFs) observed in A1 neurons in [3] can be accounted for by the two-layer scheme. We conclude that the tonotopically organized input layer has distinct equivalent inter-layer connections with the observation layer. (3) Furthermore, the simulation demonstrates that generation of the Off response in the 2nd layer relies on tonic inhibition in the 1st layer during the stimulus.
This nicely matches physiology, as the reduction of Off responses by NMDA receptor antagonists is due to the reduced inhibition during the stimulus, because excitatory synapses onto inhibitory neurons are more sensitive to NMDA receptor antagonists [4]. To summarize, the recurrent circuits in our model provide a parsimonious solution for the change detection function at the cortical level. This model of On/Off responses will be further used to reproduce mismatch responses to omitted and deviant stimuli in oddball paradigms.
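The role of slow tonic inhibition can be caricatured by a fast/slow rate reduction (a sketch under our own simplifying assumptions, not the authors’ network): the observation layer reads out the difference between a fast excitatory trace and a slower inhibition-like trace of the input layer, so one rectified difference produces a transient On response at stimulus onset, and the other rebounds into an Off response at stimulus offset, when the lingering slow trace outlasts the fast one.

```python
import math

def simulate_on_off(t_total=1.0, dt=0.001, stim_on=0.2, stim_off=0.6,
                    tau_fast=0.02, tau_slow=0.1):
    """Phenomenological fast/slow reduction of the two-layer circuit.

    x_fast: fast excitatory trace of the input layer; x_slow: slow
    (tonic-inhibition-like) trace. The observation layer reads out rectified
    differences: an On unit (fast minus slow) and an Off unit (slow minus fast).
    Times in seconds.
    """
    n = int(t_total / dt)
    x_fast = x_slow = 0.0
    on, off = [], []
    for i in range(n):
        t = i * dt
        stim = 1.0 if stim_on <= t < stim_off else 0.0
        x_fast += dt * (stim - x_fast) / tau_fast
        x_slow += dt * (stim - x_slow) / tau_slow
        on.append(max(x_fast - x_slow, 0.0))   # transient at onset
        off.append(max(x_slow - x_fast, 0.0))  # rebound at offset
    return on, off
```

The On trace peaks just after stimulus onset and the Off trace just after offset; making the two pathways identical (tau_fast == tau_slow) abolishes both responses, echoing the importance of asymmetric connections noted above.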
Hironori B, et al. Auditory cortical field coding long-lasting tonal offsets in mice. Scientific reports 2016, 6, 34421.
Deneux T, et al. Temporal asymmetries in auditory coding and perception reflect multi-layered nonlinearities. Nature communications 2016, 7, 12682.
Qin L, et al. Comparison between offset and onset responses of primary auditory cortex ON–OFF neurons in awake cats. Journal of Neurophysiology 2007, 97, 5, 3421–3431.
Rujescu D, et al. A pharmacological model for psychosis based on N-methyl-D-aspartate receptor hypofunction: molecular, cellular, functional and behavioral abnormalities. Biological Psychiatry 2006, 59, 8, 721–729.
James Knight1, Alex Cope2, Thomas Nowotny1
1University of Sussex, School of Engineering and Informatics, Brighton, United Kingdom; 2University of Sheffield, Sheffield Robotics, Sheffield, United Kingdom
Correspondence: James Knight (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P250
Various techniques to scale down models of biological neural networks have been developed. However, scaling can never preserve all properties of a network, especially not the high number of connections between neurons observed in the brain. Therefore, simulating large-scale biological neural network models remains important, and doing so in a reasonable time is one of the major technical challenges in computational neuroscience. Conventionally, large-scale simulations are executed on High Performance Computing (HPC) clusters, and the tools to distribute neural network simulations across such systems are now relatively mature. However, HPC systems are expensive and not well suited to real-time simulation. Bespoke ‘neuromorphic’ hardware has been developed to address these problems, but it comes with its own challenges and limitations.
Brette R, Rudolph M, Carnevalle T, et al. Simulation of networks of spiking neurons: A review of tools and strategies. Journal of Computational Neuroscience 2007, 23(3), 349–398. https://doi.org/10.1007/s10827-007-0038-6
Yavuz E, Turner J, Nowotny T. GeNN: a code generation framework for accelerated brain simulations. Scientific Reports 2016, 6(18854). https://doi.org/10.1038/srep18854
Potjans TC, Diesmann M. The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model. Cerebral Cortex 2012, 24(3), https://doi.org/10.1093/cercor/bhs358
Djurfeldt M, Hjorth J, Eppler JM, et al. Run-Time Interoperability Between Neuronal Network Simulators Based on the MUSIC Framework. Neuroinformatics 2010, 8(1), 43–60. https://doi.org/10.1007/s12021-010-9064-z
Stone T, Webb B, Adden A, et al. An Anatomically Constrained Model for Path Integration in the Bee Brain. Curr Biol. 2017, 23, 3069–3085. https://doi.org/10.1016/j.cub.2017.08.052
P251 Firing probability for a noisy leaky integrate-and-fire neuron receiving an arbitrary external input signal
Ho Ka Chan, Thomas Nowotny
University of Sussex, School of Engineering and Informatics, Brighton, United Kingdom
Correspondence: Ho Ka Chan (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P251
Fourcaud N, Brunel N. Dynamics of the Firing Probability of Noisy Integrate-and-Fire Neurons. Neural Comput. 2002, 14, 2057–2110
Moreno-Bote R, Parga N. Response of integrate-and-fire neurons to noisy inputs filtered by synapses with arbitrary timescales: firing rate and correlations. Neural Comput. 2010, 22(6), 1528–152.
James Bennett, Thomas Nowotny
University of Sussex, School of Engineering and Informatics, Brighton, United Kingdom
Correspondence: James Bennett (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P252
We first show two problems with this model: (1) it cannot learn reward magnitudes above an upper bound; (2) it learns only when KC-DAN excitation is minimal or absent, in contrast with experiments. We propose a solution in which D+/D− neurons are instead inhibited by −ve/+ve reward signals, and in which KC-DAN excitation is required (Fig. 1a). We also derive a plasticity rule for KC-MBON synapses that performs gradient descent on the RPE and that resembles experimentally observed rules. We call this model the Signed Valence Circuit (SVC). As before, DANs encode RPEs in the signed reward valence (Fig. 1d), and the difference in DAN firing rates, d+ − d−, yields the net RPE (Fig. 1e). The SVC can learn rapid changes to reward contingencies in just 5–10 trials (Fig. 1c). In the SVC, D+/D− respectively signal RPEs for −ve/+ve rewards, so they do not actually contribute to learning +ve/−ve valences, counter to experimental evidence. However, in a dual version of this circuit, in which D+/D− are driven by +ve/−ve rewards, D+ no longer signals decrements in −ve rewards, again in contrast with experiments. We therefore combine the SVC and its dual to produce the Signed RPE Circuit (SRC; Fig. 1b), in which the lobes encode the signed RPE of both +ve and −ve reward signals (Fig. 1g). Lastly, the SRC performs well in a traplining task (Fig. 1h–i), repeating learned routes and minimizing the distance traveled between feeding areas, a behavior exhibited by bees [6] and other species, and a foraging analogue of the travelling salesman problem.
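The RPE-driven learning described above can be sketched with a scalar Rescorla-Wagner-style rule (an illustrative reduction under our own assumptions, not the circuit model itself): the net DAN signal d+ − d− carries the RPE, and a KC-to-MBON-like weight descends the squared RPE, tracking a reversal of reward contingency within a handful of trials.

```python
def learn_reward(rewards, eta=0.5):
    """Scalar sketch of RPE-driven plasticity: rpe = r - v stands in for the
    net DAN signal (d_plus - d_minus), and the weight w (the learned value v)
    takes a gradient step on 0.5 * rpe**2 after each trial."""
    w = 0.0
    history = []
    for r in rewards:
        rpe = r - w          # net reward prediction error
        w += eta * rpe       # gradient descent on the squared RPE
        history.append(w)
    return history

# Contingency reversal: +1 reward for 20 trials, then -1 for 20 trials
values = learn_reward([1.0] * 20 + [-1.0] * 20)
```

With this learning rate the value estimate converges to the reward magnitude and re-converges within about 5–10 trials after the reversal, qualitatively matching the rapid relearning described for the SVC.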
Owald D, Waddell S. Curr. Op. Neurobiol. 2015, 35, 178–184
Cervantes-Sandoval, et al. eLife 2017, 6, e23789
Felsenberg et al. Nature 2015, 544, 240–244
Hige et al. Neuron 2015, 88, 985–998
Perisse et al. Neuron 2015, 79, 945–956
Lihoreau et al. Biol. Lett. 2012, 8, 13–16
Jung Lee, Stefan Mihalas, Luke Campagnola, Stephanie Seeman, Pasha Davoudian, Alex Hoggarth, Tim Jarsky
Allen Institute for Brain Science, Modelling, Analysis and Theory, Seattle, WA, United States
Correspondence: Jung Lee (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P253
In the experiments, PSPs were measured while presynaptic neurons were stimulated with spike trains at 10, 20, 50, 100 and 200 Hz, and we fitted the time courses of the PSPs with our synapse models. Specifically, we first used x-means clustering to identify homogeneous synapses in each class. Then, we fitted the average PSPs from homogeneous synapses to the model using ‘LMFIT’, an open-source package developed for flexible non-linear least-squares minimization [8]. So far, we have constructed ~10 synapse models in V1 that capture the short-term synaptic plasticity observed there (see Fig. 1 for an example). These models suggest (1) that most synapse classes in V1 depress and (2) that short-term synaptic plasticity depends mainly on the presynaptic neuron. We believe that these synapse models will allow us to better understand the neural basis of visual perception. Two points should be underscored. First, our synapse models will be further refined as more data become available. Second, we are currently building network models of V1 that incorporate our synapse models to study the functions of short-term synaptic plasticity in visual perception; for instance, we are studying its contribution to stimulus-specific adaptation in V1.
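As a concrete illustration of the kind of model being fitted, here is a minimal discrete-time Tsodyks-Markram-style sketch of short-term synaptic dynamics (parameter values are illustrative and the LMFIT fitting step is omitted); `u_base`, `tau_rec` and `tau_fac` are the quantities such a fit would estimate.

```python
import math

def psp_amplitudes(freq_hz, n_spikes, a=1.0, u_base=0.5,
                   tau_rec=0.8, tau_fac=0.0):
    """PSP amplitudes for a regular presynaptic spike train under a
    Tsodyks-Markram-style model: u is the release probability (facilitating),
    r the fraction of available resources (depressing). Times in seconds."""
    dt = 1.0 / freq_hz
    u, r = 0.0, 1.0
    amps = []
    for _ in range(n_spikes):
        u += u_base * (1.0 - u)        # calcium-dependent jump at each spike
        amps.append(a * u * r)         # PSP amplitude ~ released resources
        r *= (1.0 - u)                 # resources consumed by release
        r = 1.0 - (1.0 - r) * math.exp(-dt / tau_rec)            # recovery
        u = u * math.exp(-dt / tau_fac) if tau_fac > 0 else 0.0  # facilitation decay
    return amps
```

With these depressing parameters the amplitudes decline monotonically within a train, and the steady-state amplitude falls with stimulation frequency, the typical pattern for the depressing synapse classes described above.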
Abbott LF, Varela JA, Sen K, Nelson SB. Synaptic depression and cortical gain control. Science 1997, 275, 220–224.
Markram H, Wang Y, Tsodyks M. Differential signaling via the same axon of neocortical pyramidal neurons. PNAS 1998, 95(9), 5323–5328.
Stevens CF, Wang Y. Changes in reliability of synaptic function as a mechanism for plasticity. Nature 1994.
Beierlein M, Gibson JR, Connors BW. Two dynamically distinct inhibitory networks in layer 4 of the neocortex. Journal of Neurophysiology 2003, 90(5), 2987–3000.
Gibson JR, Beierlein M, Connors BW. Two networks of electrically coupled inhibitory neurons in neocortex. Nature 1999, 402(6757), 75–79.
Pala A, Petersen CCH. In vivo measurement of cell-type-specific synaptic connectivity and synaptic transmission in layer 2/3 mouse barrel cortex. Neuron 2015, 68–75.
Hennig C, Liao TF. How to find an appropriate clustering for mixed-type variables with application to socio-economic stratification. Journal of the Royal Statistical Society 2013.
Newville M, Stensitzki T, Allen DB, et al. LMFIT: Non-Linear Least-Square Minimization and Curve-Fitting for Python.
Thomas Chartrand1, Mark Goldman2, Timothy Lewis3
1University of California, Davis, Applied Mathematics and Center for Neuroscience, Davis, CA, United States; 2University of California, Davis, Departments of Neurobiology, Physiology and Behavior & Ophthalmology and Vision Science, Davis, CA, United States; 3University of California, Davis, Department of Mathematics, Davis, CA, United States
Correspondence: Thomas Chartrand (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P254
Jonathan Rubin1, Kyle Dunovan2, Catalina Vich3, Matthew Clapp4, Timothy Verstynen2
1University of Pittsburgh, Department of Mathematics, Pittsburgh, PA, United States; 2Carnegie Mellon University, PA, United States; 3Universitat de les Illes Balears, Spain; 4University of South Carolina, SC, United States
Correspondence: Jonathan Rubin (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P255
Mammals selecting actions in noisy contexts quickly adapt to unexpected outcomes to better resolve uncertainty in future decisions. Such feedback-based changes in behavior rely on plasticity within cortico-basal-ganglia-thalamic (CBGT) networks, driven by dopaminergic (DA) modulation of cortical inputs to the direct (D) and indirect (I) pathways of the striatum. DA error signals favor the D pathway over the I pathway for rewarding actions, with the opposite tendency for aversive ones, effectively encoding the values of alternative actions. It remains unclear how changes in action value influence the mechanisms of the action selection process itself. Here we use a biologically plausible spiking model of CBGT networks to illustrate (1) how feedback-driven DA signals modify the strength of the D and I pathways in accordance with a simple reinforcement learning model and (2) how asymmetries in D/I efficacy, resulting from the learning process, impact the accumulation of evidence for alternative actions. Simulations of corticostriatal synapses showed that DA feedback leads to asymmetrical weights in the D and I pathways within a given action channel, and that the ratio of these weights (w_D/w_I) effectively encodes the action’s expected value (Q). We then simulated the full CBGT network in a simple 2-choice value-based decision task under different weighting schemes for cortical inputs to the D and I pathways (high, medium, and low w_D/w_I) for one of the action channels. Response times from these simulations were fit with two variants of a drift-diffusion model (DDM), leaving either the drift rate or the boundary height free to vary with the w_D/w_I ratio. As w_D/w_I increases, the speed of information accumulation in the decision process also increases, providing a direct mapping between network-level properties of CBGT systems and cognitive decision processes.
Finally, we have incorporated the corticostriatal plasticity module into the CBGT network model to form an integrated learning and decision-making network. Fits of the DDM to integrated network outputs will provide novel predictions about the mapping between CBGT and DDM parameters—drift-rate, boundary height, accumulation onset time, bias, and others—that best captures RTs associated with variable reward schedules in human experiments performed in our lab. This framework also allows us to explore how particular basal ganglia network features, such as tonic dopamine levels and changes in synaptic connection strengths, relate to changes in decision-making strategies, including those driven by behavioral parameters such as expectation and motivation.
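The drift-diffusion mapping can be sketched with a toy first-passage simulation (a generic DDM sketch with made-up parameters, not the fitted model): treating the w_D/w_I ratio as a multiplier on the drift rate, a larger ratio yields faster and more accurate simulated choices.

```python
import math
import random

def simulate_ddm(drift, n_trials=300, bound=1.0, sigma=1.0,
                 dt=0.001, seed=0, t_max=5.0):
    """First-passage simulation of a drift-diffusion model.
    Returns (mean RT, fraction of trials ending at the upper boundary)."""
    rng = random.Random(seed)
    rts, upper = [], 0
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound and t < t_max:
            x += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            t += dt
        rts.append(t)
        if x >= bound:
            upper += 1
    return sum(rts) / n_trials, upper / n_trials

# Treat w_D/w_I as a drift-rate multiplier (an illustrative assumption)
rt_high, acc_high = simulate_ddm(drift=1.5)   # high w_D/w_I
rt_low, acc_low = simulate_ddm(drift=0.3)     # low w_D/w_I
```

The high-ratio condition reaches the correct boundary sooner and more often, mirroring the faster evidence accumulation with increasing w_D/w_I reported above.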
Takashi Hayakawa, Tomoki Fukai
RIKEN Brain Science Institute, Laboratory for Neural Coding and Brain Computing, Wako, Japan
Correspondence: Takashi Hayakawa (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P256
Coherence between the activities of individual neurons and local field potential oscillations has been suggested as a clue to the mechanisms underlying information integration in the brain [1–6]. Experiments have also revealed that balanced excitatory and inhibitory synaptic inputs to neurons underlie local field potential oscillations [7]. However, despite recent pioneering studies of oscillations in neuronal networks [8–15], how local field potential oscillations emerge from balanced excitatory and inhibitory inputs, and how individual neuronal activities become coherent with those oscillations, remain to be understood theoretically. In the present study, we investigate a simple neuronal network model with a dynamical balance between excitatory and inhibitory recurrent inputs, developing an analytical method that extends a previous theory [16] and describes this type of network theoretically for the first time (see [17] for a preprint). In this network, the microscopic dynamics of a small number of neurons are amplified by the strong excitation and inhibition and are reflected in the macroscopic dynamics of the mean synaptic input over the network, which has been considered the origin of local field potentials. Conversely, the macroscopic dynamics of the mean synaptic input constrain the microscopic fluctuations in the activities of individual neurons. As a result of these bidirectional interscale interactions, oscillatory patterns of the mean synaptic input similar to local field potential oscillations spontaneously emerge. As the magnitude of the balanced excitation and inhibition is increased, the mean synaptic input and the neuronal activities become coherent. This type of coherent state can also be induced by applying external stimuli to a small number of neurons in the network. The above behaviour of the network model is predicted by our theory with good quantitative agreement between theory and direct simulations.
Numerical results further suggest that the coherent states allow selective and reproducible read-out of information from the network. In conclusion, our results suggest a novel form of neuronal information processing that accounts for the emergence of local field potential oscillations, their coherence with neuronal activities, and the role of coherent dynamics in information processing in the brain. We also expect our results to provide a foundation for designing artificial neuronal networks for reservoir computing and beyond.
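The emergence of oscillations with increasing balanced coupling can be caricatured by a two-population excitatory-inhibitory rate model (our own minimal sketch, not the network analysed in the abstract): scaling excitation and inhibition together by a gain g carries the system through a Hopf-like transition from damped fluctuations to a sustained oscillation of the mean activity.

```python
import math

def simulate_ei(g, t_total=200.0, dt=0.01, tau_e=1.0, tau_i=2.0,
                w_ee=1.0, w_ei=2.0, w_ie=2.0):
    """Euler integration of a saturating E-I rate model in which the
    excitatory and inhibitory couplings are scaled together by the gain g.
    Returns the excitatory activity trace."""
    e, i = 0.1, 0.0
    trace = []
    for _ in range(int(t_total / dt)):
        de = (-e + math.tanh(g * (w_ee * e - w_ei * i))) / tau_e
        di = (-i + math.tanh(g * w_ie * e)) / tau_i
        e += dt * de
        i += dt * di
        trace.append(e)
    return trace

def late_amplitude(trace):
    """Peak-to-peak amplitude over the second half of the simulation."""
    half = trace[len(trace) // 2:]
    return max(half) - min(half)
```

With weak coupling (g = 0.5) the activity spirals into the fixed point, while strong balanced coupling (g = 2.0) sustains a limit-cycle oscillation, qualitatively mirroring the coherence transition with increasing balanced excitation and inhibition described above.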
O’Keefe J, Recce ML. Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus 1993, 3, 317.
Buzsáki G. Theta oscillations in the hippocampus. Neuron 2002, 33, 325.
Harris KD, et al. Spike train dynamics predicts theta-related phase precession in hippocampal pyramidal cells. Nature 2002, 417, 738.
Fries P, et al. The gamma cycle. Trends Neurosci. 2007, 30, 309.
Poulet JF, Petersen CC. Internal brain state regulates membrane potential synchrony in barrel cortex of behaving mice. Nature 2008, 454, 881.
Strüber D, et al. Antiphasic 40 Hz oscillatory current stimulation affects bistable motion perception. Brain Topography 2014, 27, 158.
Atallah BV, Scanziani M. Instantaneous modulation of gamma oscillation frequency by balancing excitation with inhibition. Neuron 2009, 62, 566.
Faugeras O, et al. A constructive mean-field analysis of multi-population neural networks with random synaptic weights and stochastic inputs. Front. Comp. Neurosci. 2008, 3, 1.
Hermann G, Touboul J. Heterogeneous connections induce oscillations in large-scale networks. Phys. Rev. Lett. 2012, 109, 018702.
Cabana T, Touboul J. Large deviation, dynamics and phase transitions in large stochastic and disordered neural networks. J. Stat. Phys. 2013, 153, 211.
Lagzi F, Rotter S. A Markov model for the temporal dynamics of balanced random networks of finite size. Front. Comp. Neurosci. 2014, 8, 1.
Montbrió E, et al. Macroscopic description for networks of spiking neurons. Phys. Rev. X 2015, 5, 021028.
Sancristóbal B, et al. Collective stochastic coherence in recurrent neuronal networks. Nat. Phys. 2016, 12, 881.
García del Molino LC, et al. Synchronization in random balanced networks. Phys. Rev. E 2013, 88, 042824.
Stern M, Abbott L. Dynamics of rate-model networks with separate excitatory and inhibitory populations. SFN2016.
Kadmon J, Sompolinsky H. Transition to chaos in random neuronal networks. Phys. Rev. X 2015, 5, 041030.
Hayakawa T, Fukai T. Spontaneous and stimulus-induced coherent states in dynamically balanced neuronal networks. arXiv:1711.09621
Espen Hagen1, Gaute Einevoll2, Jan-Eirik W Skaar2, Alexander J Stasik1, Torbjørn V Ness2
1University of Oslo, Department of Physics, Oslo, Norway; 2Norwegian University of Life Sciences, Faculty of Science and Technology, Ås, Norway
Correspondence: Espen Hagen (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P257
Biologically inspired machine learning (“deep learning”) techniques such as convolutional neural networks (CNNs) have shown tremendous power to detect non-trivial features in a wide repertoire of data types. Increased computational power, availability of large labeled data sets, and general-purpose open source software implementations such as Google’s Tensorflow (https://www.tensorflow.org) ensure that the popularity of these techniques is rapidly rising, for example in various image classification tasks [1]. In experimental neuroscience, data with high-dimensional features are routinely collected using a variety of techniques. One such comparatively easy-to-perform technique is the measurement of extracellular potentials by inserting electric probes into neural tissue. However, the interpretation of the low-frequency part of the signal, the local field potential (LFP), is hard because the measured signals result from both local and remote neural activity. Applications of CNNs to LFP analysis are not yet widespread, in particular with regard to detecting and classifying neural events or states that may not readily be detected using conventional methods. Here, we ask: can CNNs be trained to estimate the underlying model parameters of spiking neuron networks from the LFPs they generate? We apply a recently developed hybrid scheme for computing extracellular potentials from spiking point-neuron network models [2] to a cortex-like, sparsely connected network model consisting of one excitatory and one inhibitory population of leaky integrate-and-fire (LIF) neurons. The network is simple enough to allow for detailed analysis of its state space [3]. We systematically vary different network parameters (for example, connection strengths and the amount of external input), run each simulation, and compute the corresponding ‘virtual’ LFP signals as if measured at different depths through the neuronal populations.
We then train CNNs set up using Tensorflow on subsets of the LFP data and explore to what extent model parameters can be estimated by the CNNs for the remaining LFP data. We indeed find that these CNNs can, based on the generated LFP, accurately identify the model parameters underlying the simulations of this relatively simple spiking network. This work contributes to a better understanding of what information is available in the LFP signal. It is also a first step towards new analysis methods applicable to experimental LFP data that can be used to obtain more detailed information about the underlying neurons and neural networks.
Rawat W, Wang Z. Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review. Neural Comput 2017, 29, 2352–2449
Hagen E, Dahmen D, Stavrinou ML, et al. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks. Cereb Cortex 2016, 26, 4461–4496
Brunel N. Dynamics of Sparsely Connected Networks of Excitatory and Inhibitory Spiking Neurons. J Comput Neurosci 2000, 8, 183–208
P258 Electrical synapses between inhibitory neurons shape the responses of principal neurons to transient inputs in the thalamus: a modeling study
Julie Haas, Tuan Pham
Lehigh University, Dept. of Biological Sciences, Bethlehem, PA, United States
Correspondence: Julie Haas (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P258
P259 An auto-encoder architecture for transcriptomic cell type analysis: 2D mapping of mouse cortical cells
Allen Institute for Brain Science, Modelling, Analysis and Theory, Seattle, WA, United States
Correspondence: Uygar Sumbul (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P259
Single cell RNA sequencing (scRNA-seq) can obtain snapshots of the transcriptomic identities of single cells, including neurons. While it has emerged as a high-throughput method of generating cell atlases based on similarities in gene expression profiles, its high-dimensional representations and complicated noise processes create dimensionality reduction challenges for many problems. Here, we present an auto-encoder architecture that improves the quality of the low-dimensional embeddings of scRNA-seq data. We show that the resulting embedding can identify cortical cell types and resolve previously merged classes in a recent deep scRNA-seq dataset of more than 20,000 cells.
Margaret Mahan1, Shivani Venkatesh2, Maxwell Thorpe2, Tessneem Abdallah2, Hannah Casey2, Aliya Ahmadi2, Mark Oswood3, Charles Truwit3, Chad Richardson4, Uzma Samadani2
1University of Minnesota, Biomedical Informatics and Computational Biology, Minneapolis, MN, United States; 2Hennepin County Medical Center, Neurosurgery, Minneapolis, MN, United States; 3Hennepin County Medical Center, Radiology, Minneapolis, MN, United States; 4Hennepin County Medical Center, General Surgery, Minneapolis, MN, United States
Correspondence: Margaret Mahan (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P260
Introduction: Traumatic brain injury (TBI) occurs when an external force results in structural damage to the brain, typically in white matter regions. In cases of mild to moderate TBI, this damage often goes undetected with conventional imaging techniques. However, since the structural damage involves axon shearing, magnetic resonance imaging (MRI), combined with network science, may improve detection and ultimately uncover the underlying dysfunction in TBI. Furthermore, research using Diffusion Tensor Imaging (DTI) has shown that diffusion properties, as well as connectivity patterns, can characterize TBI networks, namely through decreased fractional anisotropy (FA), increased mean diffusivity (MD), higher small-worldness, higher modularity, and lower global efficiency. While these metrics provide insights into the properties of the structural network, the specific attributes of the network that are disrupted after TBI are still unknown. Here, we aim to further advance knowledge about the spatial attributes of TBI-related network dysfunction by applying novel network science methods.
Methods: The study enrolled 29 controls and 43 TBI patients who underwent an MRI scan (sagittal T1-weighted volumes and axial diffusion-weighted volumes acquired in 32 directions) within 4 ± 2 days from injury. The Human Connectome Project Multimodal Parcellation, providing 180 regions per hemisphere, was utilized for node definitions in the structural graph, with each region corresponding to one node. Further resolution of these graphs was achieved at 2-fold, 5-fold, and 10-fold splitting of each region via k-means with biological constraints. Edges in the structural graph were represented by streamlines seeded from each white matter voxel in the cerebrum, thresholded by anisotropy and curvature, and calculated using probabilistic Bayesian tractography and deterministic FACT algorithm tractography. Streamlines were retained if they connected two different nodes, the connection included the seed, and the streamline was at least 10 mm long. The resulting edge definitions for weighting the structural graphs include streamline counts and mean FA, each with corrections for node volume and streamline length. Adjacency matrices were constructed using the aforementioned node and edge definitions. These matrices were analyzed for graph measures of segregation, integration, and influence, with subsequent group analysis via the participation coefficient. The final analysis applied spatial machine learning algorithms for assessing network dysfunction.
Results: Previous research has shown that results are sensitive to the choices made in structural network construction. Here, we comprehensively construct a variety of graphs for each subject and utilize each graph as a valid representation of the structural network. First, the diffusion properties in TBI subjects showed similar patterns of alterations in FA and MD, and specific tract-related decreases in FA will be presented. Second, graph metrics of segregation, integration, and influence show interesting changes in the acute TBI case, most notably changes in network efficiency. Next, feature extraction was implemented to find indications of disconnections in the TBI structural network, followed by spatial machine learning algorithms to reveal the spatial attributes of these network differences. The results present a novel step towards understanding structural network dysfunction in acute mild to moderate TBI.
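Of the graph measures used above, the participation coefficient is worth making explicit, since it drives the group analysis. A minimal weighted implementation (variable names are ours): for node i with total strength k_i and within-module strengths k_im, P_i = 1 − Σ_m (k_im / k_i)².

```python
def participation_coefficients(adj, modules):
    """Participation coefficient for each node of a weighted, undirected
    graph. adj: square adjacency matrix (list of lists); modules: module
    label per node. P = 0 when all of a node's weight stays in one module;
    P approaches 1 as weight spreads evenly over many modules."""
    n = len(adj)
    coeffs = []
    for i in range(n):
        k_i = sum(adj[i])
        if k_i == 0:
            coeffs.append(0.0)   # isolated node: no participation
            continue
        k_im = {}
        for j in range(n):
            if adj[i][j]:
                k_im[modules[j]] = k_im.get(modules[j], 0.0) + adj[i][j]
        coeffs.append(1.0 - sum((k / k_i) ** 2 for k in k_im.values()))
    return coeffs
```

For example, a ‘connector’ node linked equally to two modules scores P = 0.5, while a ‘provincial’ node confined to its own module scores P = 0.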
New Jersey Institute of Technology, Department of Mathematical Sciences, Newark, NJ, United States
Correspondence: Victor Matveev (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P261
This work is supported by NSF grant DMS-1517085.
Hake J, Lines GT: Stochastic binding of Ca2+ ions in the dyadic cleft; continuous versus random walk description of diffusion. Biophys J 2008, 94(11):4184–4201.
Modchang C, Nadkarni S, Bartol TM, Triampo W, Sejnowski TJ, Levine H, Rappel WJ: A comparison of deterministic and stochastic simulations of neuronal vesicle release models. Phys Biol 2010, 7(2):026008.
Weinberg SH, Smith GD: Discrete-state stochastic models of calcium-regulated calcium influx and subspace dynamics are not well-approximated by ODEs that neglect concentration fluctuations. Comput Math Methods Med 2012, 2012:897371.
Flegg MB, Rudiger S, Erban R: Diffusive spatio-temporal noise in a first-passage time model for intracellular calcium release. J Chem Phys 2013, 138(15):154103.
Felmy F, Neher E, Schneggenburger R: Probing the intracellular calcium sensitivity of transmitter release during synaptic facilitation. Neuron 2003, 37(5):801–811.
Benjamin Cramer1, David Stöckel1, Johannes Schemmel1, Karlheinz Meier1, Viola Priesemann2,3
1Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany; 2Max Planck Institute for Dynamics & Self-Organization, Göttingen, Germany; 3Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
Correspondence: Benjamin Cramer (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P262
We study the dynamics of spiking neural networks subject to synaptic plasticity driven by causality, emulated on accelerated, analog neuromorphic hardware. By adjusting the coupling to the external input or the degree of recurrence, different dynamical regimes can be observed. For highly recurrent networks, long-tailed avalanche distributions emerge. Furthermore, computationally relevant features develop, as quantified by information theory. The applicability of the network to reservoir computing is tested in an auditory setup, where network features can be selected and adjusted for a desired task by tuning the coupling to the external input.
Rebecca Miko, Christoph Metzner, Volker Steuber
University of Hertfordshire, Biocomputational Research Group, Hatfield, United Kingdom
Correspondence: Rebecca Miko (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P263
Naturalistic odour stimuli have a rich temporal structure. It has been hypothesised that this structure contains information about the olfactory scene, for example the distance to an odour source [1, 2]. Furthermore, it has been suggested that animals might exploit this structure and extract this information in order to find odour sources. As some of this information may lie in the frequency content of the stimuli, we studied the input-frequency-dependent responses of mitral cells (MCs) in the olfactory bulb (OB), the first processing stage in the mammalian olfactory system. Specifically, we investigated whether MCs show frequency tuning and, if they do, how different components of the glomerular layer circuitry shape and determine the tuning. We used a model of the OB (modified from [4]) containing periglomerular cells (PGCs) and MCs, thus focusing on the recurrent and feed-forward inhibition in the glomerular layer. Simple sinusoidal currents of varying strengths and frequencies were used as input to the model. We constructed frequency tuning curves, extracted the peak resonance frequencies and examined how these changed for different parameter combinations. We also considered the strength of the tuning, measured as (max firing rate − mean firing rate)/mean firing rate. We found that the resonance frequency decreased as the excitation of PGCs (both from the input and from the MCs) increased, whereas the strength of the PGC inhibition onto MCs did not seem to have a strong effect. Furthermore, the resonance strength increased with the strength of the excitatory connection between MCs and PGCs when the PGCs received sufficient external input from olfactory stimuli. These results suggest that MCs can indeed show frequency tuning and that it depends on the strength of the excitatory synaptic input to the PGCs, which provide inhibitory input to the MCs. However, the observed frequency tuning occurred in a narrow range (19.5–33.0 Hz).
Future work should investigate how the OB could use this frequency tuning to obtain information about the surrounding olfactory scene.
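The tuning-strength measure described above, (max firing rate − mean firing rate)/mean firing rate, can be sketched in a few lines. This is a minimal illustration; the tuning-curve values below are hypothetical, not taken from the model:

```python
# Hypothetical mitral-cell firing rates (Hz) at each sinusoidal input
# frequency (Hz); values are illustrative, not simulation output.
tuning_curve = {5: 12.0, 10: 14.5, 15: 18.0, 20: 26.0, 25: 31.0, 30: 22.0, 35: 16.0}

def resonance(tuning):
    """Return (peak resonance frequency, tuning strength), where the
    strength is (max rate - mean rate) / mean rate as in the abstract."""
    rates = list(tuning.values())
    mean_rate = sum(rates) / len(rates)
    peak_freq = max(tuning, key=tuning.get)
    strength = (max(rates) - mean_rate) / mean_rate
    return peak_freq, strength

peak, strength = resonance(tuning_curve)
```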
Celani A, Villermaux E, Vergassola M. Odor landscapes in turbulent environments. Physical Review X 2014, 4(4), 041015.
Schmuker M, Bahr V, Huerta R. Exploiting plume structure to decode gas source distance using metal-oxide gas sensors. Sensors and Actuators B: Chemical 2016, 235, 636–646.
Jacob V, Monsempès C, Rospars JP, et al. Olfactory coding in the turbulent realm. PLoS Computational Biology 2017, 13(12), p. e1005870.
Li G, Cleland TA. A two-layer biophysical model of cholinergic neuromodulation in olfactory bulb. Journal of Neuroscience 2013, 33(7), 3037–3058.
P264 The combined effect of homeostatic structural and inhibitory synaptic plasticity during the repair of balanced networks following deafferentation
Ankur Sinha, Christoph Metzner, Rod Adams, Neil Davey, Michael Schmuker, Volker Steuber
University of Hertfordshire, Biocomputation Research Group, Hatfield, United Kingdom
Correspondence: Ankur Sinha (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P264
Although a number of previous experimental and theoretical studies have investigated network reorganisation following deafferentation down to the level of synaptic elements, the mechanisms involved in this process are still not completely understood. We examined the dynamics of the repair mechanism by incorporating activity-dependent homeostatic structural plasticity into a spiking neural network model balanced by inhibitory synaptic plasticity. Results from our simulations suggest that the process of reconfiguration of lateral connectivity following sensory deprivation is extremely sensitive to the balance of excitation and inhibition (E-I) in the network. We find that while fast homeostatic inhibitory synaptic plasticity is able to re-establish the E-I balance in neurons outside the lesion projection zone (LPZ), it prevents them from transferring excitatory activity to the deprived neurons in the LPZ. On the other hand, uncontrolled disinhibition by suppression of homeostatic inhibitory synaptic plasticity initially allows deprived neurons to regain activity but fails to stabilise the network back to a functional balanced state. These observations are in accordance with findings indicating that inhibition plays a critical role in network rewiring, seemingly by stimulating structural plasticity mechanisms seen during development. The sprouting of inhibitory axons outwards from the LPZ, opposite in direction to that of excitatory axons, has also been observed, possibly to re-inhibit neurons outside the LPZ. Therefore, we hypothesise that the ratio of excitation to inhibition must follow a specific trajectory in the different regions of the network to enable successful repair, as has been observed in various studies. The model of structural plasticity implements the dynamics of synaptic elements as dependent only on the intrinsic properties of individual neurons.
The configuration of the network, through the formation and removal of synapses, therefore depends solely on the numbers of the various synaptic elements. Our current work extends this model by considering other factors that affect network rewiring, such as the activity-dependent stability of synapses and inhibition-gradient-guided axonal sprouting, to build a more faithful simulation of the underlying dynamics. This will enable us to study the effects of network reorganisation after deprivation on its computational functions, such as associative memory.
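The element-count-driven rewiring idea can be illustrated with a toy sketch: each neuron homeostatically gains or loses free synaptic elements depending on its activity, and free axonal and dendritic elements are paired at random to form synapses. This is inspired by the model class described above, not the authors' implementation; all names, the setpoint rule, and the values are illustrative assumptions:

```python
import random

def rewire(activity, elements_ax, elements_dend, setpoint=5.0, rate=0.1, seed=0):
    """Toy homeostatic structural-plasticity step: neurons below the
    activity setpoint grow free synaptic elements, neurons above it lose
    them; free axonal and dendritic elements are then paired at random to
    form synapses. Returns the list of (pre, post) synapses formed."""
    rng = random.Random(seed)
    for i, a in enumerate(activity):
        delta = rate * (setpoint - a)            # homeostatic element growth
        elements_ax[i] = max(0.0, elements_ax[i] + delta)
        elements_dend[i] = max(0.0, elements_dend[i] + delta)
    # each whole free element contributes one pairing slot
    free_ax = [i for i, e in enumerate(elements_ax) for _ in range(int(e))]
    free_dend = [i for i, e in enumerate(elements_dend) for _ in range(int(e))]
    rng.shuffle(free_ax)
    rng.shuffle(free_dend)
    return list(zip(free_ax, free_dend))

# Neuron 0 is below the setpoint (grows elements), neuron 1 above (shrinks).
synapses = rewire([2.0, 8.0, 5.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0])
```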
Butz M, van Ooyen A. A Simple Rule for Dendritic Spine and Axonal Bouton Formation Can Account for Cortical Reorganization after Focal Retinal Lesions. PLoS Comput Biol 2013, 9(10), e1003259.
Chen JL, et al. Structural basis for the role of inhibition in facilitating adult brain plasticity. Nature Neuroscience 2011, 14(5).
Marik SA, et al. Large-scale axonal reorganization of inhibitory neurons following retinal lesions. Journal of Neuroscience 2014, 34(5).
Sammons RP, Keck T. Adult plasticity and cortical reorganization after peripheral lesions. Current Opinion in Neurobiology 2015, 35.
Vetencourt JFM, et al. The antidepressant fluoxetine restores plasticity in the adult visual cortex. Science 2008, 320, 5874.
Vogels TP, et al. Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science 2011, 334, 6062.
Christoph Metzner1, Bartosz Zurowski2, Volker Steuber1
1University of Hertfordshire, Biocomputation Research Group, Hatfield, United Kingdom; 2University of Lübeck, Center for Integrative Psychiatry, Lübeck, Germany
Correspondence: Christoph Metzner (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P265
Since synchronized neuronal activity might underlie efficient communication in the brain, alterations thereof, as found in EEG/MEG studies of patients with schizophrenia, might contribute to the symptoms characterizing the disorder. A robust finding is a deficit in the gamma band auditory steady-state response (ASSR). Fast-spiking PV+ interneurons appear to be a major contributor to gamma oscillations. However, this class of inhibitory interneurons can be divided into at least two subgroups: basket cells (BCs) and chandelier cells (ChCs). Interestingly, cellular and molecular alterations in schizophrenia have been identified for both subtypes. However, the roles these two subgroups play during the generation of gamma oscillations, and during abnormal oscillations in schizophrenia, remain unresolved. We use a simple model consisting of three populations of theta neurons: (1) pyramidal cells (PCs), (2) BCs and (3) ChCs (based on [3, 4]). We assume that a prolonged GABAergic decay time at ChC synapses is a major contributor to gamma and beta band ASSR deficits in schizophrenia and model this by increasing the decay time constant for ChCs. We then explore the model behaviour in response to oscillatory inputs in the beta and gamma range, for different ratios of BCs to ChCs (BCs are known to be more numerous than ChCs), different strengths of inhibition from ChCs onto PCs (ChCs might exert powerful inhibition because their synapses directly target the axon initial segment of PCs) and reductions in the strength of BC inhibition (a possible result of genetic alterations in schizophrenia). At realistic BC/ChC ratios, increased ChC inhibition due to increased decay times is not sufficient to strongly reduce gamma power as has been described for patients with schizophrenia. Under the assumption that ChCs exert much more powerful control over PC firing, stronger reductions were observed.
However, the model did not reproduce other deficits that have been described in schizophrenia, such as an increase in beta power for 20 and 40 Hz stimulation. Simultaneously reducing BC inhibition did not change this overall behaviour. Interestingly, prolonged decay times at BC-PC synapses led both to a strong decrease in gamma power and to an increase in beta power, matching experiments more closely. We conclude that changes in the dynamics at ChC-PC synapses might not be a major contributor to gamma and beta band ASSR deficits in schizophrenia. Our results suggest that the more numerous BCs are likely to dominate the influence that inhibitory interneurons exert on the PC population during oscillatory entrainment.
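The theta-neuron model class used for the three populations can be illustrated with a minimal single-cell sketch (the Ermentrout-Kopell canonical model). Connectivity, synaptic kinetics, and the drive values below are omitted or illustrative, not the parameters of the study:

```python
import math

def theta_neuron_spikes(i_drive, t_max=200.0, dt=0.01, theta0=0.0):
    """Ermentrout-Kopell canonical ("theta") neuron:
        dtheta/dt = (1 - cos(theta)) + (1 + cos(theta)) * I
    A spike is counted each time the phase theta crosses pi. Units are
    dimensionless; I > 0 gives repetitive firing, I <= 0 quiescence."""
    theta, spikes = theta0, 0
    for _ in range(int(round(t_max / dt))):
        dtheta = (1.0 - math.cos(theta)) + (1.0 + math.cos(theta)) * i_drive
        theta += dt * dtheta
        if theta > math.pi:        # phase crossed pi: register spike, wrap
            theta -= 2.0 * math.pi
            spikes += 1
    return spikes

spikes_driven = theta_neuron_spikes(1.0)   # tonic firing for positive drive
spikes_rest = theta_neuron_spikes(0.0)     # quiescent at the SNIC bifurcation
```

For constant drive I = 1 the phase advances at a constant rate of 2, so the firing period is exactly pi in these units.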
Gonzalez-Burgos G, Lewis D.A. NMDA receptor hypofunction, parvalbumin-positive neurons, and cortical gamma oscillations in schizophrenia. Schizophrenia bulletin 2012, 38(5), pp. 950–957.
Thune H, Recasens M, Uhlhaas PJ. The 40-Hz auditory steady-state response in patients with schizophrenia: a meta-analysis. JAMA psychiatry 2016, 73(11), pp. 1145–1153.
Vierling-Claassen D, Siekmeier P, Stufflebeam S, Kopell N. Modeling GABA alterations in schizophrenia: a link between impaired inhibition and altered gamma and beta range auditory entrainment. Journal of Neurophysiology 2008, 99(5), pp. 2656–2671.
Metzner C. Modeling GABA alterations in schizophrenia: a link between impaired inhibition and altered gamma and beta range auditory entrainment. ReScience 2017, 3(1).
Adree Songco Aguas1, Fred Rieke1, William Grimes2
1University of Washington, Departments of Physiology & Biophysics, Seattle, WA, United States; 2National Institutes of Health, Neuroscience Department, Bethesda, MD, United States
Correspondence: Adree Songco Aguas (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P266
Parallel processing underlies computation in many neural circuits. Several common circuit motifs control how parallel processing contributes to circuit function: (1) divergence of common inputs to parallel circuits; (2) distinct linear shaping of signals in different parallel circuits; and (3) the location of key circuit nonlinearities relative to the convergence points of signals from different parallel circuits. Interactions between rod- and cone-mediated signals in the retina provide an excellent opportunity to investigate these computational elements. Vision relies on inputs from both rod and cone photoreceptors across light conditions ranging from moonlight to dawn, and visual perception is strongly influenced by interactions between the resulting signals. To understand how retinal mechanisms contribute to these perceptual interactions, we aim to develop a model that predicts retinal output in response to temporally and spatially modulated images in dim and intermediate light. We will use direct retinal recordings from cells across the primate retina to constrain the architecture of our model and test its ability to capture key features of rod-cone interactions. The model will then be used to predict neural responses to novel stimuli, specifically focusing on identifying stimuli that highlight the importance of specific circuit features in shaping retinal outputs; significant discrepancies between predictions and empirical measurements will be used to fine-tune the model. Ultimately, this model will improve both our understanding of how perceptually relevant computation operates in parallel circuits and our ability to incorporate relevant computational features into devices (e.g. retinal prosthetics) that aim to replicate retinal function.
Donald Doherty1, Subhashini Sivagnanam2, Salvador Dura-Bernal3, William W Lytton3
1SUNY Downstate Medical Center, Department of Anesthesiology, Pittsburgh, PA, United States; 2University of California, San Diego, San Diego Supercomputer Center, La Jolla, CA, United States; 3SUNY Downstate Medical Center, Department of Physiology and Pharmacology, Brooklyn, NY, United States
Correspondence: Donald Doherty (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P267
Avalanches have been suggested to reflect a scale-free organization of cortex. It is hypothesized that such an organization may support a particularly effective form of activity propagation, balanced between failure (activity fails to reach a target area) and overactivation (activity reaches a target area via many routes, leading to wasted activity or epileptiform activity patterns). We electrically stimulated a computer model of mouse primary motor cortex (M1) and analyzed signal flow over space and time. Initially we stimulated a 300 μm × 600 μm slice of M1 using a 10 μm × 10 μm, 0.5 nA stimulus across all 6 layers of cortex (1350 μm) for 100 ms. Waves of activity swept across the cortex for half a second after the end of the electrical stimulus. We extracted avalanches from the data by counting events (spikes) occurring within 1 ms frames. An avalanche of length N was defined as N consecutively active frames, preceded and followed by a blank frame. A graph of the cortical slice above, with the 0.5 nA stimulus, displayed a bimodal distribution. We observed 18 avalanches in total: 4 single-neuron avalanches, with all other avalanches containing more than 1000 neurons each. The largest avalanche contained 7000 neurons. Studies have generally shown avalanche size distributions to follow a straight line on a log–log graph, highest for small avalanches and decreasing as the avalanches get larger. We looked at responses of M1 to lower amplitude stimuli between 0.05 and 0.5 nA to see if they might fit a classic inverse power-law curve. We graphed the M1 response to a 500 ms electric stimulus at various amplitudes and found particularly clear inverse power-law responses to stimuli between 0.16 and 0.18 nA. In the 300 μm × 300 μm slice of M1 for 500 ms using 0.16 nA, we observed 90 avalanches, ranging from a single neuron action potential in isolation to 13 neurons spiking.
A large proportion of the neurons participating in the avalanches were SOM neurons, but they also included IT neurons at this level of stimulation. Neurons from every layer of cortex except layer 4 participated in avalanches. At stimulus onset, neurons within an avalanche spiked at the same time. Spike onset amongst neurons within an avalanche became more heterogeneous as time progressed, especially after about 400 ms. For example, a 5-neuron avalanche began 431 ms after stimulus onset with a SOM6 neuron spike (x: 84.2 μm, z: 98.9 μm). Eight-tenths of a millisecond later it was followed by an IT5A spike (x: 92.8 μm, z: 85.7 μm). Next, after 0.65 ms, a different SOM6 neuron spiked (x: 79.1 μm, z: 83.2 μm), and finally the avalanche ended with yet another SOM6 spike (x: 81.0 μm, z: 64.3 μm). We observed similar results using a 0.18 nA stimulus, which elicited 110 avalanches, from single-neuron avalanches to avalanches that included 12 neurons. The simulation of avalanches in cortex offers advantages for analysis that are not readily available in vivo or in vitro. We have been able to record from every neuron in our M1 slice and follow activity from cell to cell. In the future we will analyze how avalanches take place within and between layers.
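The avalanche-extraction procedure described above (1 ms frames, avalanches bounded by blank frames) can be sketched directly. The spike times here are toy values, not simulation output:

```python
def extract_avalanches(spike_times_ms, frame_ms=1.0, t_stop_ms=500.0):
    """Bin spike times into frames and return avalanches, each a list of
    consecutive non-empty frame spike counts bounded by blank frames
    (the definition used in the abstract)."""
    n_frames = int(t_stop_ms / frame_ms)
    counts = [0] * n_frames
    for t in spike_times_ms:
        idx = int(t / frame_ms)
        if 0 <= idx < n_frames:
            counts[idx] += 1
    avalanches, current = [], []
    for c in counts:
        if c > 0:
            current.append(c)
        elif current:                 # blank frame ends the avalanche
            avalanches.append(current)
            current = []
    if current:
        avalanches.append(current)
    return avalanches

# Toy spike train (hypothetical): one 3-frame avalanche and one 1-frame avalanche.
spikes = [10.2, 10.7, 11.1, 12.3, 50.5]
aval = extract_avalanches(spikes)
```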
Supported by NIH U01EB017695.
P268 NetPyNE: a high-level interface to NEURON to facilitate the development, parallel simulation and analysis of data-driven multiscale network models
Salvador Dura-Bernal1, Padraig Gleeson2, Samuel Neymotin1, Benjamin A Suter3, Adrian Quintana4, Matteo Cantarelli5, Michael Hines6, Gordon Shepherd7, William W Lytton1
1SUNY Downstate Medical Center, Department of Physiology and Pharmacology, Brooklyn, NY, United States; 2University College London, Dept. of Neuroscience, Physiology & Pharmacology, London, United Kingdom; 3Institute of Science and Technology (IST), Austria; 4EyeSeeTea Ltd, United Kingdom; 5Metacell LLC, CA, United States; 6Yale University, Department of Neuroscience, CT, United States; 7Northwestern University, Department of Physiology, IL, United States
Correspondence: Salvador Dura-Bernal (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P268
Research supported by NIH Grant U01EB017695, DOH01-C32250GG-3450000, NIH R01EB022903 and NIH R01MH086638.
Lytton WW, Seidenstein A, Dura-Bernal S, Schurmann F, McDougal RA, Hines ML. Simulation neurotechnologies for advancing brain research: Parallelizing large networks in NEURON. Neural Comput. 2016
Dura-Bernal S, Neymotin SA, Suter BA, Shepherd GMG, Lytton WW. Long-range inputs and H-current regulate different modes of operation in a multiscale model of mouse M1 microcircuits. bioRxiv. 2017 07 [Preprint]; https://doi.org/10.1101/201707
Adam J. H. Newton1, Alexandra H. Seidenstein2, Robert A. McDougal1, Michael Hines1, William W Lytton2
1Yale University, Department of Neuroscience, New Haven, CT, United States; 2SUNY Downstate Medical Center, Department of Physiology and Pharmacology, Brooklyn, NY, United States
Correspondence: Adam J. H. Newton (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P269
Research supported by NIH Grant R01MH086638.
McDougal RA, Hines ML, Lytton WW. Reaction–diffusion in the NEURON simulator. Frontiers in Neuroinformatics 2013 7(28).
Newton AJH, McDougal RA, Hines ML, Lytton WW. Using NEURON to promote reproducibility in reaction–diffusion modeling of extracellular dynamics. Frontiers in Neuroinformatics. (in press).
Newton, AJH, and Lytton, WW. Computer modeling of ischemic stroke. Drug Discovery Today: Disease Models 2017.
Robert A. McDougal1, Adam J. H. Newton1, William W Lytton2
1Yale University, Department of Neuroscience, New Haven, CT, United States; 2SUNY Downstate Medical Center, Department of Physiology and Pharmacology, Brooklyn, NY, United States
Correspondence: Robert A. McDougal (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P270
The NEURON simulator (neuron.yale.edu) provides a computational framework for studying not only networks of neurons but also the interplay between electrophysiology and chemical dynamics (both intracellular and extracellular reaction–diffusion models). The models underlying these studies can be specified, simulated, and analyzed using both Python and graphical tools. NEURON’s graphical tools previously focused on supporting pure electrophysiology models. We describe a new integrated graphical toolset, powered by wxPython 4.x, for specifying and visualizing NEURON models that combine reaction–diffusion dynamics with traditional electrophysiology simulation. In comparison to electrophysiology models, these models feature new types of regions (1D and 3D, intracellular organelles, extracellular space, etc.) and new types of kinetics. Our toolset includes an expanded RxDBuilder supporting recent enhancements to NEURON’s reaction–diffusion capabilities, including extracellular and 3D intracellular simulations. The intracellular 3D graphical tools provide a detailed view of the cell’s morphology, enabling the modeler to select a region of interest over which to plot relevant intracellular concentrations. For the extracellular space, the GUI allows the modeler to view the concentration dynamics for a single voxel, an average around the cell or section of interest, or the whole extracellular space. To allow model changes from both the console and the GUI, the graphical tools run in a separate thread that periodically polls the internal state; a function is provided to allow arbitrary wxPython windows to run in the same thread, allowing user customization. For performance reasons, state variables are recorded in C++ during simulations; visualization occurs via Python at a user-specifiable interval.
A session, consisting of the models, their current state, and the graphical tools, may be saved and loaded for future reuse. We demonstrate the utility of these model construction and visualization tools with a 3D intracellular calcium wave model and an extracellular model of spreading depression.
Research supported by NIH MH 086638.
Dmitrii Todorov, Wilson Truccolo
Brown University, Department of Neuroscience, Providence, RI, United States
Correspondence: Dmitrii Todorov (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P271
In recent years, nonlinear Hawkes processes implemented as point-process Generalized Linear Models (GLMs) have proven to be a useful tool for analyzing microelectrode array recordings. On the more theoretical side, they are related to well-known models of neuronal dynamics such as the spike response model and can capture the spiking temporal patterns of Izhikevich canonical models. Unlike ODE-based neuron models, point-process GLMs can be fitted directly to spike time data. For most relevant models, fitting is straightforward with standard optimization tools, as the likelihood function is strictly convex. Despite the acknowledged utility of nonlinear Hawkes process GLMs, the dynamics of fitted models have attracted attention only recently. In particular, simulation of fitted models can often produce unphysiologically high firing rates, despite the models passing many goodness-of-fit tests. Here, an “unphysiologically high rate” means a rate close to the inverse of the absolute refractory period, reflecting “runaway excitation”. To make nonlinear Hawkes process GLMs useful for long-term prediction of neuronal activity and for simulation studies, it is important to understand which model features can lead to runaway excitation. The mathematical theory of nonlinear Hawkes processes is not fully developed. Prior studies (e.g. Bremaud and Massoulie, Ann Prob, 1996) have focused either on the mere existence of finite stationary firing rates (and do not, in general, allow one to estimate their actual values), or on the theoretical examination of infinite neuronal networks with some degree of homogeneity in the connectivity, whereas actual recordings typically contain no more than a few hundred neurons and lack homogeneity. The question we consider here is how to predict runaway excitation for an arbitrary finite network of Hawkes processes. Several recent theoretical approaches, based on statistical physics, allow one to approximate stationary firing rates for nonlinear Hawkes processes.
These include mean-field approximations, the 1-loop approximation (Ocker et al., PLoS CB, 2017), the quasi-renewal (QR) approximation (Gerhard et al., PLoS CB, 2017) and the regular firing rate test. These approaches are quite different conceptually, were introduced in different settings, and have limitations in different directions. For example, the mean-field approximation does not work for neurons with an absolute refractory period without additional adjustments; the 1-loop approximation inherits the same issue and also shows poor accuracy for strongly nonlinear functions (at least for some networks), whereas the QR approximation is primarily designed for exponential nonlinearities only. Moreover, the QR approximation can predict multiple “fixed points” (stationary firing rates) that may relate to the actual dynamics in a nontrivial manner. In summary, the strengths and limitations of these different approaches have so far not been compared systematically, and their application to real data has been limited. We present a study that compares how the above approaches perform for simple single- and multiple-neuron nonlinear Hawkes process GLMs and compare their predictions with simulations. We identify model features that make some approaches work much better than others. We show that, in some cases, the different approaches can complement each other. Finally, we demonstrate how the different approaches work when applied to multivariate nonlinear Hawkes process GLMs fitted to actual spiking data.
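Runaway excitation in this model class is easy to demonstrate by simulation. The sketch below is a univariate point-process GLM with an exponential nonlinearity, simulated in discrete time; the baseline rate and history kernels are illustrative assumptions, not fitted to recordings:

```python
import math
import random

def simulate_glm(baseline_log_rate, kernel, n_steps=5000, dt=0.001, seed=1):
    """Discrete-time simulation of a univariate point-process GLM with an
    exponential nonlinearity (a nonlinear Hawkes-type model). kernel[k] is
    the additive effect on the log-rate of a spike k+1 bins in the past.
    Returns the spike count and the largest conditional intensity (spikes/s)."""
    rng = random.Random(seed)
    spike_bins = set()
    max_rate = 0.0
    for t in range(n_steps):
        drive = baseline_log_rate
        for lag, w in enumerate(kernel, start=1):
            if (t - lag) in spike_bins:
                drive += w
        rate = math.exp(min(drive, 50.0))          # cap for numerical safety
        max_rate = max(max_rate, rate)
        if rng.random() < 1.0 - math.exp(-rate * dt):
            spike_bins.add(t)
    return len(spike_bins), max_rate

# Stable model: strong refractoriness followed by mild self-excitation.
n_stable, r_stable = simulate_glm(math.log(10.0), [-10.0, -10.0, 1.0, 0.5])
# Runaway model: strong, purely excitatory history effects.
n_run, r_run = simulate_glm(math.log(10.0), [5.0] * 5)
```

In the second model, a single spike multiplies the rate by e^5, so the conditional intensity quickly approaches the inverse bin width: runaway excitation, despite the baseline being a modest 10 Hz.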
Timothée Proix1, Mehdi Aghagolzadeh1, Leigh R. Hochberg2, Sydney Cash3, Wilson Truccolo4
1Brown University, Department of Neuroscience & Institute for Brain Science, Providence, RI, United States; 2Brown University, U.S. Department of Veterans Affairs and Institute for Brain Science, Providence, RI, United States; 3Massachusetts General Hospital, MA, United States; 4Brown University, Department of Neuroscience, Providence, RI, United States
Correspondence: Timothée Proix (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P272
Methods to reliably predict seizures in patients with epilepsy have been sought for several decades. Reliable seizure prediction would have a major impact on the quality of life of people with pharmacologically intractable seizures, allowing for new seizure prevention therapies based on warning and closed-loop electrical stimulation systems. While most seizure prediction systems have relied upon EEG and/or ECoG, the predictive value of intracortical neural signals remains little explored. Here, we demonstrate that seizures can be predicted early in advance from the neural activity of small neocortical patches distal from the identified seizure onset areas. We used multiunit activity and local field potentials recorded via microelectrode arrays (Blackrock Microsystems, Salt Lake City, Utah), together with machine learning algorithms, to show that interictal and preictal activity in people with focal seizures can be discriminated. Intracortical signals were recorded in 5 patients undergoing neuromonitoring for resective surgery, from a neocortical area distal to the identified seizure onset areas. Preictal periods were defined as the one-hour period leading to a seizure, with a 5-minute interval between the preictal period and the seizure onset time. Interictal periods excluded the four hours preceding any seizure. This setting attenuates potential errors and uncertainty in the determination of actual seizure onset times and in the separation of interictal and preictal periods. Long short-term memory (LSTM) recurrent neural networks were used to assess the predictive power of the different features extracted from the recorded neural signals. Substantial predictive power, as assessed by the area under the receiver operating characteristic curve, was achieved, with a score of 90% for at least one type of feature in each patient.
Importantly, we show that successful prediction can be achieved based exclusively on the multiunit activity of recorded neurons, detected by thresholding the high-pass filtered electric potentials. This result indicates that neural activity in the recorded local neocortical patch exhibited preictal changes not only in subthreshold postsynaptic potentials, which could be driven by the distal epileptogenic areas, but also in the local neuronal spiking activity of the recurrent neocortical networks. Our findings indicate that large-scale neuronal networks are engaged beyond the identified epileptogenic seizure onset areas as a seizure approaches, and they open new perspectives for seizure prediction and control by emphasizing the contribution of multiscale neural signals in these networks.
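The interictal/preictal labelling scheme described above can be sketched as follows. This is a minimal sketch under the stated definitions only; the function and variable names are our own, and treating post-onset windows as interictal is a simplifying assumption:

```python
def label_windows(window_starts_s, seizure_onsets_s,
                  preictal_len_s=3600, gap_s=300, exclusion_s=4 * 3600):
    """Label analysis windows per the abstract's definitions: preictal =
    the hour ending 5 min before a seizure onset; windows within 4 h
    before any onset (and not preictal) are excluded from the interictal
    class. Times are in seconds. Windows at or after an onset are treated
    as interictal here for simplicity (a simplifying assumption)."""
    labels = []
    for t in window_starts_s:
        label = "interictal"
        for onset in seizure_onsets_s:
            if onset - gap_s - preictal_len_s <= t < onset - gap_s:
                label = "preictal"       # preictal takes precedence
                break
            if onset - exclusion_s <= t < onset:
                label = "excluded"
        labels.append(label)
    return labels

# One seizure at t = 10000 s: preictal spans [6100, 9700) s.
labels = label_windows([7000, 6000, 20000], [10000])
```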
Cook MJ, O’Brien TJ, Berkovic SF, et al. Prediction of seizure likelihood with a long-term, implanted seizure advisory system in patients with drug-resistant epilepsy: a first-in-man study. The Lancet Neurology 2013, 12, 563–571.
Brinkmann BH, Wagenaar J, Abbot D, et al. Crowdsourcing reproducible seizure forecasting in human and canine epilepsy. Brain 2016, 139, 1713–1722.
Truccolo W, Donoghue JA, Hochberg LR, et al. Single-neuron dynamics in human focal epilepsy. Nature Neuroscience 2011, 14, 635–641.
Rosangela Follmann1, Epaminondas Rosa1, Wolfgang Stein2
1Illinois State University, School of Information Technology, Normal, IL, United States; 2Illinois State University, School of Biological Sciences, Normal, IL, United States
Correspondence: Rosangela Follmann (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P273
Long-range communication in the nervous system is carried out by the propagation of action potentials along the axons of nerve cells. While typically thought of as unidirectional, axonal propagation of action potentials not uncommonly occurs in both directions. This is because action potentials can be initiated at multiple ‘ectopic’ positions along the axon. Axons are endowed with ionotropic and metabotropic receptors for transmitters and neuromodulators that can alter membrane excitability and initiate ectopic action potentials. Action potentials generated at distinct sites and traveling toward each other will collide. Recently, it has been suggested that some biological axons may allow action potentials to cross, and that Hodgkin-Huxley type models may be inadequate for representing some axons. However, this view has been challenged in a subsequent study using a reduced Hodgkin-Huxley model. As neuronal information is encoded in the frequency of action potentials, the rate of action potential collision and annihilation may affect the way in which neuronal information is received, processed and transmitted. Additionally, action potential collision and annihilation can be relevant to the treatment of spinal cord injury and of chronic pain of peripheral origin. Here we present numerical simulations and experimental results aimed at elucidating the behaviour of colliding action potentials. We introduce an axonal multicompartmental model in which the compartments, represented by Hodgkin-Huxley equations, are reciprocally connected to each other by diffusive coupling. The numerical simulations are capable of mimicking low-frequency ectopic spiking with orthodromic and antidromic action potential propagation. They predict that colliding action potentials traveling in opposite directions annihilate and do not cross.
We further discuss this matter in the context of axonal excitability and supernormality in the wake of action potential generation for neurons of type I and type II. We also present results of experimental work performed on the earthworm ventral nerve cord and on the crustacean stomatogastric nervous system. Both the numerical simulations and the experimental data clearly and unambiguously demonstrate that annihilation is inevitable.
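A minimal sketch of the multicompartmental model class described above: standard Hodgkin-Huxley compartments with diffusive nearest-neighbour voltage coupling, stimulated at both ends so that two action potentials travel toward each other. The coupling strength, stimulus amplitude, and compartment count are illustrative assumptions, not the parameters of the study:

```python
import math

def hh_chain(n_comp=40, n_steps=4000, dt=0.01, g_couple=2.0):
    """Chain of standard Hodgkin-Huxley compartments with diffusive
    (nearest-neighbour, no-flux boundary) coupling. Both end compartments
    receive a brief current pulse, launching counter-propagating action
    potentials. Returns per-step voltage snapshots (mV)."""
    gna, gk, gl = 120.0, 36.0, 0.3          # mS/cm^2
    ena, ek, el = 50.0, -77.0, -54.4        # mV
    v = [-65.0] * n_comp
    m = [0.053] * n_comp                    # resting gating values at -65 mV
    h = [0.596] * n_comp
    n = [0.318] * n_comp
    traces = []
    for step in range(n_steps):
        t = step * dt
        vnew = v[:]
        for i in range(n_comp):
            i_stim = 100.0 if (t < 2.0 and i in (0, n_comp - 1)) else 0.0
            left = v[i - 1] if i > 0 else v[i]
            right = v[i + 1] if i < n_comp - 1 else v[i]
            i_cpl = g_couple * (left - 2.0 * v[i] + right)
            # standard HH rate functions (voltage in mV, time in ms)
            am = 1.0 if v[i] == -40.0 else 0.1 * (v[i] + 40.0) / (1.0 - math.exp(-(v[i] + 40.0) / 10.0))
            bm = 4.0 * math.exp(-(v[i] + 65.0) / 18.0)
            ah = 0.07 * math.exp(-(v[i] + 65.0) / 20.0)
            bh = 1.0 / (1.0 + math.exp(-(v[i] + 35.0) / 10.0))
            an = 0.1 if v[i] == -55.0 else 0.01 * (v[i] + 55.0) / (1.0 - math.exp(-(v[i] + 55.0) / 10.0))
            bn = 0.125 * math.exp(-(v[i] + 65.0) / 80.0)
            i_ion = (gna * m[i] ** 3 * h[i] * (ena - v[i])
                     + gk * n[i] ** 4 * (ek - v[i]) + gl * (el - v[i]))
            vnew[i] = v[i] + dt * (i_ion + i_stim + i_cpl)   # C_m = 1 uF/cm^2
            m[i] += dt * (am * (1.0 - m[i]) - bm * m[i])
            h[i] += dt * (ah * (1.0 - h[i]) - bh * h[i])
            n[i] += dt * (an * (1.0 - n[i]) - bn * n[i])
        v = vnew
        traces.append(v[:])
    return traces

traces = hh_chain()
```

Plotting the voltage of the middle compartment over time (or the full space-time array) shows the two waves meeting; whether they annihilate or cross is exactly the question the abstract addresses.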
Bucher D, Goaillard JM. Prog Neurobiol 2011, 94, 307.
Gonzalez-Perez A, et al. Phys Rev X 2014, 4(3), 031047.
Meier SR. PLoS ONE 2015, 10(3), e0122401.
Zhang X, et al. IEEE Trans Biomed Eng 2006, 53, 2445.
Follmann R, Rosa E Jr, Stein W. Phys Rev E 2015, 92(3), 032707.
Chiara Gastaldi, Samuel Muscinelli, Wulfram Gerstner
École Polytechnique Fédérale de Lausanne, Blue Brain Project, Lausanne, Switzerland
Correspondence: Chiara Gastaldi (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P274
Consolidation of synaptic changes in response to neural activity is thought to be fundamental for memory maintenance over a time scale of hours. In experiments, synaptic consolidation can be induced by repeatedly stimulating presynaptic neurons. However, the effectiveness of such protocols depends crucially on the repetition frequency of such stimulations and the mechanisms that cause this complex dependence are unknown.
Samuel Muscinelli1, Tilo Schwalger2, Wulfram Gerstner3
1École Polytechnique Fédérale de Lausanne, School of Life Sciences, Lausanne, Switzerland; 2École Polytechnique Fédérale de Lausanne, Laboratory of Computational Neuroscience., Lausanne, Switzerland; 3École Polytechnique Fédérale de Lausanne, Blue Brain Project, Lausanne, Switzerland
Correspondence: Samuel Muscinelli (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P275
Biological neurons exhibit a rich repertoire of biophysical mechanisms beyond passive input integration. Among these, spike frequency adaptation has received great interest due to its richness in time scales, which allows sensory neurons to transmit information optimally. Understanding the effect of such history-dependent processes on the dynamics of recurrent neural networks has proven to be a hard task. Recent developments in mean-field approaches in the presence of such history-dependent processes [3, 4] allow one to compute the mean and fluctuations of the network activity. To obtain the temporal structure of the recurrently generated fluctuations, however, one has to solve the system self-consistently for the fluctuations. This can be done, in the large-N limit of non-adaptive, randomly connected networks of rate units, using Dynamical Mean-Field Theory (DMFT), which first revealed the existence of a quiescent phase and a chaotic phase in the network dynamics. However, this technique was until now restricted to non-adaptive networks. Here we apply DMFT to a randomly connected network of adaptive rate neurons. The technical challenge in this setting is that the resulting mean-field system is two-dimensional, rendering standard DMFT techniques inapplicable. We propose an iterative method that allows fast computation of the mean power spectral density of the network activity. We show that in a large portion of the adaptation parameter space, the dynamics of the adaptive neural network is qualitatively different from that of the non-adaptive one. Besides the purely chaotic and purely quiescent phases, the adaptive network features two new phases. For strong recurrent connectivity and strong adaptation, the chaotic dynamics exhibits a power spectral density with a peak of finite width, which means that in the DMFT limit the system can be described as a stochastic oscillation.
For lower connection strength, a bistable phase also emerges, in which a stable fixed point coexists with limit cycles, even in the large-N limit. Finally, we extend the well-known result for the eigenvalue spectrum of Gaussian random matrices to the adaptive case. This allows us to compute the stability of the zero fixed point, and we show that it approximately predicts both the separation between pure chaos and stochastic oscillations and the oscillation frequency at criticality.
The additional dynamical richness of adaptive neural networks could explain their better performance in learning tasks that require integrating information over long time scales. This highlights a potentially novel and important role of adaptation that arises through network-level interactions.
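As a rough illustration of the setting (not the authors' iterative DMFT method), a randomly connected network of adaptive rate units can be simulated directly and its unit-averaged power spectral density estimated numerically. The dynamical equations, the tanh nonlinearity, and all parameter values below are generic illustrative choices, not those of the abstract:

```python
import numpy as np

def simulate_adaptive_network(N=200, g=2.0, beta=1.0, tau_a=10.0,
                              T=200.0, dt=0.05, seed=0):
    """Euler simulation of a randomly connected adaptive rate network.

    dx_i/dt = -x_i + g * sum_j J_ij tanh(x_j) - beta * a_i
    tau_a da_i/dt = x_i - a_i,   with J_ij ~ N(0, 1/N).
    All parameter values are illustrative.
    """
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
    x = rng.normal(0.0, 0.5, size=N)
    a = np.zeros(N)
    n_steps = int(T / dt)
    traj = np.empty((n_steps, N))
    for t in range(n_steps):
        phi = np.tanh(x)
        x, a = x + dt * (-x + g * J @ phi - beta * a), a + dt * (x - a) / tau_a
        traj[t] = x
    return traj

def mean_psd(traj, dt):
    """Unit-averaged power spectral density via the periodogram."""
    n = traj.shape[0]
    spec = np.abs(np.fft.rfft(traj, axis=0))**2 * dt / n
    return np.fft.rfftfreq(n, d=dt), spec.mean(axis=1)

traj = simulate_adaptive_network()
freqs, psd = mean_psd(traj[1000:], dt=0.05)   # drop the initial transient
```

With strong coupling and adaptation, the estimated spectrum typically shows power concentrated around a nonzero frequency, consistent with the "stochastic oscillation" phase described above.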
Pozzorini C, Naud R, Mensi S, Gerstner W. Temporal whitening by power-law adaptation in neocortical neurons. Nature neuroscience 2013, 16(7), 942.
Fairhall AL, Lewen GD, Bialek W, de Ruyter van Steveninck RR. Efficiency and ambiguity in an adaptive neural code. Nature 2001, 412(6849), 787.
Deger M, Schwalger T, Naud R, Gerstner W. Fluctuations and information filtering in coupled populations of spiking neurons with adaptation. Physical Review E 2014, 90(6), 062704.
Schwalger T, Deger M, Gerstner W. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size. PLoS computational biology 2017, 13(4), e1005507.
Sompolinsky H, Crisanti A, Sommers HJ. Chaos in random neural networks. Physical review letters 1988, 61(3), 259.
Girko VL. Circular law. Theory of Probability & Its Applications 1985, 29(4), 694–706.
Muscinelli SP, Gerstner W. Long timescale sequence recognition using adaptive neural networks. Conference on Cognitive Computational Neuroscience 2017.
Saba Entezari1, Pamela M Baker2, Wyeth Bair2
1University of Washington, Mechanical Engineering, Seattle, WA, United States; 2University of Washington, Biological Structure, Seattle, WA, United States
Correspondence: Wyeth Bair (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P276
Introduction: Adaptation is a ubiquitous property of cortical neurons, but how adaptation alters the encoding of sensory inputs across multiple stages of processing is not well understood. The pathway from V1 to MT is ideal for understanding how adaptation at one stage (V1) influences the encoding in downstream neurons, because past work has characterized changes in selectivity in V1 and MT under a diverse set of adaptation paradigms, and circuit-level models for this pathway have been proposed. Nevertheless, there is currently no image-computable model of MT responses that includes adaptation. Thus, we developed such a model to better understand, at the single-unit and circuit level, how the encoding of visual motion and the emergence of pattern direction selectivity is altered by adaptation across cortical stages.
Methods: We added several mechanisms of adaptation to our image-computable model of MT component direction selective (CDS) and pattern direction selective (PDS) neurons (Baker & Bair, 2016, J Neurosci; Baker & Bair, ModVis 2017). The model includes spatial integration from V1 to MT, V1 iso-orientation surround suppression (IOSS), and normalization stages in V1 and MT. First, we implemented single-stage gain adaptation on the raw motion energy signals in twelve direction channels at each spatial location. The adapted signals are then used to compute the surround suppression signal and a spatially local classical untuned normalization signal. The normalized signals pass through a V1 opponency stage before being normalized and integrated (across space and direction) at the MT stage to form CDS and PDS units. Second, we implemented a recently proposed form of adaptation in which normalization weights between units are updated by a learning rule that aims to achieve pairwise response-product homeostasis (RPH; Westrick et al., 2016, J Neurosci), extending this mechanism from orientation to direction channels.
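A minimal sketch of an RPH-style reweighting rule, in the spirit of Westrick et al. (2016) but not their actual implementation: normalization weights between channel pairs grow when the current response product exceeds a homeostatic target and shrink otherwise. The update rule, the simple divisive normalization stage, and all constants here are illustrative assumptions:

```python
import numpy as np

def rph_update(w, r, target, eta=0.01):
    """One step of a response-product-homeostasis (RPH) style update.

    Weights increase where the response product r_i * r_j exceeds its
    homeostatic target, strengthening suppression between co-active
    (adapted) channels.  Illustrative, not Westrick et al.'s exact rule.
    """
    w = w + eta * (np.outer(r, r) - target)
    return np.clip(w, 0.0, None)        # keep normalization weights non-negative

def normalized_response(drive, w, sigma=0.1):
    """Divisive normalization across direction channels with pool weights w."""
    return drive / (sigma + w @ drive)

# toy demo: 12 direction channels, a prolonged adapter drives channel 0
n = 12
w = np.ones((n, n)) / n
drive = np.full(n, 0.1)
drive[0] = 1.0                          # adapting stimulus
target = np.full((n, n), 0.05)          # homeostatic response-product target
for _ in range(200):
    r = normalized_response(drive, w)
    w = rph_update(w, r, target)
```

After adaptation, the weights pooling the adapter-driven channel have grown, so its normalized response is suppressed relative to the pre-adaptation state.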
Results: For single-stage gain adaptation, we found that the effect on V1 tuning of prolonged adaptation to a single direction depended on the size of the adapting and test stimuli (drifting grating patches optimized for the unit under study) in a manner qualitatively consistent with electrophysiological results in terms of response amplitude (Patterson et al., 2013, J Neurosci). However, tuning curves showed attractive shifts in the presence of untuned normalization when flank adaptation was limited to the classical receptive field (CRF), unlike repulsive shifts reported in the literature. When untuned normalization was omitted, there were no attractive shifts and no shifts in tuning for MT CDS cells, but there were repulsive shifts for PDS cells, contrary to the literature. For RPH normalization, we were able to achieve the desired repulsive shifts in V1 direction tuning for flank adaptation, but direction channels not driven by the adapter showed implausibly large increases in gain.
Conclusions: Our results so far suggest that simple combinations of mechanisms believed to be fundamental to processing along the V1-to-MT pathway are not sufficient to account simultaneously for the influences of adaptation across a diverse set of stimulus configurations in the CRF and surround. To remedy this, we are implementing two-stage gain adaptation and exploring alterations to RPH normalization that can better account for physiological data.
We thank Adam Kohn for advice. Funding: NIH R01 EY027023-01.
Baker PM, Bair W. A Model of Binocular Motion Integration in MT Neurons. Journal of Neuroscience 2016, 36(24):6563–6582
Baker PM, Bair W. Unifying Binocular, Spatial, and Spatiotemporal Frequency Integration in Models of MT Neurons. Computational and Mathematical Models in Vision (MODVIS) workshop 2017; St Pete Beach (FL).
Westrick ZM, Heeger DJ, Landy MS. Pattern Adaptation and Normalization Reweighting. Journal of Neuroscience 2016. 36 (38) 9805–9816.
Patterson CA, Wissig CW, Kohn A. Distinct effects of brief and prolonged adaptation on orientation tuning in primary visual cortex. Journal of Neuroscience 2013. 33 (2) 532–543.
Ang Li1, Si Wu1, Ye Li2, Xiaohui Zhang1
1Beijing Normal University, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing, China; 2Zhejiang University, Interdisciplinary Institute of Neuroscience and Technology, Hangzhou, China
Correspondence: Ang Li (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P277
Robert Capps1, Taegyo Kim2, Khaldoun Hamade2, Sergey Markin2, Dmitrii Todorov3, William Barnett1, Elizaveta Latash1, Yaroslav Molkov1
1Georgia State University, Department of Mathematics & Statistics, Atlanta, GA, United States; 2Drexel University College of Medicine, Philadelphia, PA, United States; 3Brown University, Department of Neuroscience, Providence, RI, United States
Correspondence: Yaroslav Molkov (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P278
The striatum is a structure of the basal ganglia that is critical for reinforcement learning. In the striatum, cholinergic tonically active neurons (TANs) are thought to gate the dopaminergic input to medium spiny neurons during their involvement in action selection and reinforcement. TANs exhibit a context-dependent pause in their activity, during which the dopamine (DA) concentration in the striatum varies to encode reward prediction error (RPE), i.e. the difference between the expected and obtained reward. Although this mechanism has been the subject of many experimental studies, the role of TANs in motor learning is not well understood.
However, it is known that TANs generate a short burst in response to a stimulus, which is followed by a pause in TAN activity lasting several hundred milliseconds. During the pause, dopaminergic neurons modulate DA release into the striatum to encode the RPE and thus induce learning. After this pause, TANs return to normal tonic firing, and the striatal dopamine concentration stabilizes at its baseline level. The duration of the TAN pause depends on dopaminergic inputs to TANs through activation of D2 receptors. During baseline tonic firing, TANs, being cholinergic, control the output of dopaminergic neurons by releasing acetylcholine (ACh) that binds to the nicotinic receptors of the latter. When a reward is presented, TANs receive a short stimulus from the thalamus. This short increase in TAN activity is then followed by inhibition via a slow after-hyperpolarization (sAHP) current, which lasts several seconds, inducing a pause in the tonic firing of TANs. Another current, the hyperpolarization-activated cation current (h-current), allows quick recovery from the sAHP; the h-current in these neurons is down-regulated by DA via D2 receptors. Therefore, the TAN pause is produced by the slow AHP current, and the length of the pause is modulated by the faster h-current. Thus, encoding the RPE depends on dynamic interactions between DA and ACh release mechanisms in the striatum. In this study, we constructed a mathematical model of ACh-DA interactions to clarify the role of TANs in reinforcement learning. We fit our model to data obtained in electrophysiological experiments. Furthermore, we integrated the modeled ACh-DA interactions into our previously published model of reward-based motor learning during center-out reaching movements. We simulated the effects of the striatal dopamine deficiency observed in Parkinson's disease patients. 
Additionally, we simulated and mechanistically explained the effects of administering L-DOPA, a common treatment in the early stages of Parkinson's disease, clarifying the mechanism by which L-DOPA restores learning in these patients. In simulations, our model shows that both the baseline DA concentration and phasic DA release positively correlate with the duration of the TAN pause. Therefore, in the case of striatal DA deficiency, the loss of learning is associated not only with a lower DA concentration but also with a shorter TAN pause, which leaves a shorter time window for learning to occur. We simulated L-DOPA administration by increasing the baseline concentration of DA in the striatum, which allowed partial recovery of motor learning even though the magnitude of phasic DA release was not affected. Our model explains this recovery by L-DOPA-mediated prolongation of the TAN pause, which increases learning efficiency.
Elizaveta Latash, Robert Capps, William Barnett, Yaroslav Molkov
Georgia State University, Department of Mathematics & Statistics, Atlanta, GA, United States
Correspondence: Yaroslav Molkov (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P279
The respiratory and cardiovascular systems work together to oxygenate tissues and remove carbon dioxide, and are physiologically integrated. The central neural circuits that control respiratory and cardiovascular functions are located in the brainstem and receive sensory feedback to maintain gas homeostasis. Respiratory and cardiovascular physiological outputs are partially synchronized and modulated by each other, and the respective brainstem neuronal networks have reciprocal synaptic connections. However, no quantitative mechanistic description has been proposed to explain specific aspects of cardio-respiratory interactions and their alterations in certain pathophysiological conditions. Two major markers of cardio-respiratory interactions have been identified: cardio-ventilatory coupling (CVC) and respiratory sinus arrhythmia (RSA). CVC is usually interpreted as a form of partial synchronization between the cardiac and respiratory rhythms, characterized by a varying probability of a heartbeat occurring at different phases of the respiratory cycle. RSA refers to changes in heart rate at different respiratory phases, usually represented as the dependence of the inter-heartbeat (R–R) interval on the respiratory phase, with the R–R interval shortened during inspiration and prolonged during expiration. Because of these similar representations, CVC and RSA are often confused; however, there is substantial experimental evidence that independent mechanisms mediate the two phenomena. Here, we introduce a closed-loop model of the integrated respiratory and cardiovascular control system that describes mechanisms for both CVC and RSA. The model combines and extends our previous data-driven models that incorporated mechanisms of cardiovascular input to the respiratory system or respiratory input to the cardiovascular system. 
In this model, CVC is mediated by the pulsatile inputs from arterial baroreceptors to neurons of the respiratory central pattern generator (rCPG) with pulses corresponding to the increases and subsequent relaxations in arterial pressure caused by heart contractions. We implement baroreceptor input to the rCPG as excitatory projections from 2nd order baro-sensitive neurons of the nucleus of solitary tract (NTS) to the expiratory population of the rCPG. This makes the onset of inspiration less likely to occur right after the heartbeat thus reproducing a characteristic structure of the heartbeat probability distribution. Our model explains RSA by modulation of the vagal input to the sinoatrial node of the heart. By fitting the literature data, we suggest that RSA should be driven by respiratory modulation of vagal cardiac neurons from both inspiratory and expiratory rCPG populations to accurately reproduce the experimentally observed dependence of the average R–R interval duration on the respiratory cycle phase.
P280 Cortical dynamics on multiple time-scales drive growth of smooth maps together with local heterogeneity
Caleb Holt1, Yashar Ahmadian2
1University of Oregon, Department of Physics, Eugene, OR, United States; 2University of Oregon, Institute of Neuroscience, Eugene, OR, United States
Correspondence: Caleb Holt (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P280
The primary visual cortex of higher mammals develops smooth maps for many features of its neurons' receptive fields, such as preferred orientation or spatial frequency. Such maps may be beneficial in minimizing wiring lengths between neurons selective for similar features. Nevertheless, even in visual cortices with smooth maps, the receptive fields of nearby neurons show a considerable degree of heterogeneity. Correspondingly, some receptive field features appear to be uncorrelated between nearby cells, and average signal correlations between nearby cells are near zero. Such a random, “salt-and-pepper” organization may in turn be advantageous in reducing the response redundancy of local cortical populations and increasing their information content. Thus a combination of smooth maps for some features and salt-and-pepper organizations for others may provide both the benefits of wiring length minimization and informational efficiency. Previous theoretical models have accounted for the development of cortical feature selectivity and feature maps based on activity-dependent Hebbian plasticity. However, these models inevitably predict that the cortex either develops smooth maps for all features (when long-range recurrent cortical excitation is strong) or random distributions of preference for all features (when cortical recurrent excitation is weak); they fail to account for the observed mixture. We propose that this failure stems in part from the fact that these models do not account for the intrinsic temporal dynamics of the cortex. If properly considered, cortical interactions at slow and fast time scales will couple to the slow and fast features of the stimuli, respectively. We show that, given appropriate forms for the cortical interaction and input correlations at different time scales, this coupling allows the development of smooth maps for slow stimulus features and random preference distributions for fast features. 
In particular, by simulating and analyzing one- and two-dimensional topographic models of development of thalamocortical connectivity, we show that our framework can sustain both smooth maps and salt-and-pepper organizations, providing a more biologically plausible mechanism for receptive field feature development.
IBM Research, Tokyo, Japan
Correspondence: Yasunori Yamada (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P281
The apolipoprotein E epsilon 4 allele (APOE-4) is the strongest genetic risk factor for sporadic Alzheimer’s disease. Neuroimaging studies have revealed that APOE-4 carriers differ in structural and functional network connectivity. Additionally, resting-state functional magnetic resonance imaging studies have shown that these differences can be observed even in the absence of cognitive impairment, or even before the onset of brain amyloid accumulation. However, how structural changes in the brain affect its activity and function remains poorly understood. To help provide a better understanding, I built large-scale cortical models based on structural connectivity data from diffusion tensor imaging of aging APOE-4 non-carriers and carriers. Each cortical model consisted of 2.4 million spiking neurons and 4.8 billion synaptic connections. Using these, I simulated resting-state cortical activities and investigated the distinctive properties observed in vivo at multiple scales. I found that the intrinsic cortical activities of both models matched typical patterns and quantitative indices from biological observations. However, the cortical model based on the structural connectivity of the APOE-4 carriers showed significantly increased complexity of neural ensembles, and a weaker structural–functional relationship of inter-areal connectivity as well as weaker functional connectivity. To gain insight into how these differences in intrinsic cortical activity influence cortical information processing, I also investigated the properties of the responses to cortical inputs. I found that the cortical model based on the data of the APOE-4 carriers showed decreased cortical responses and fewer cortical regions responding to the input compared with the model based on the data of the non-carriers. 
From these experiments, the results suggest that structural changes in APOE-4 carriers might bring about complex and unstructured intrinsic activities, which might result in reducing cortical information propagation. This computational approach allowing for detailed analyses that are difficult, or impossible in human studies, may help to provide a causal understanding of how structural changes influence cortical information processing.
Philip Maybank1, Ingo Bojak2, Richard G. Everitt1, Ying Zheng3
1University of Reading, Department of Mathematics & Statistics, Reading, United Kingdom; 2University of Reading, Schools of Psychology & Clinical Language Sciences, Reading, United Kingdom; 3University of Reading, Department of Biomedical Sciences & Biomedical Engineering, Reading, United Kingdom
Correspondence: Philip Maybank (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P282
Maybank P, Bojak I, Everitt RG. Fast approximate Bayesian inference for stable differential equation models. arXiv 2017, arXiv:1706.00689 [stat.CO].
Moran RJ, Stephan KE, Seidenbecher T, Pape HC, Dolan RJ, Friston KJ. Dynamic causal models of steady-state responses. NeuroImage 2009, 44(3), 796–811.
Bojak I, Liley DTJ. Modeling the effects of anesthesia on the electroencephalogram. Phys Rev E 2005, 71:1–22.
Bojak I, Stoyanov ZV, Liley DTJ. Emergence of spatially heterogeneous burst suppression in a neural field model of electrocortical activity. Front Syst Neurosci 2015, 9, 18.
Kang S, Bruyns-Haylett M, Hayashi Y, Zheng Y. Concurrent Recording of Co-localized Electroencephalography and Local Field Potential in Rodent. J Vis Exp 2017, 129:e56447.
Aurel A. Lazar, Nikul Ukani, Yiyin Zhou
Columbia University, Department of Electrical Engineering, New York, NY, United States
Correspondence: Nikul Ukani (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P283
We demonstrate that such a model, with a very simple choice of filters, is capable of robust luminance and contrast adaptation and is able to respond reliably and efficiently to stimuli whose intensities vary over orders of magnitude. Further, we demonstrate the tractability of identifying the filters in the DN model. Although we focused on the early stage of visual processing, every stage down the visual processing pathway, including motion detection, has been shown to be robust at various brightness and contrast levels. Adaptation to the mean and variance of stimuli has been observed in the early olfactory and auditory systems as well. The divisive normalization model that we describe here can be applied generally to the modeling and identification of these systems.
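A toy illustration of why divisive normalization confers robustness across intensity ranges: dividing the signal by a running estimate of its own mean level makes the output depend on contrast rather than absolute intensity. The filter choice (a leaky integrator tracking local luminance) and all constants are illustrative assumptions, not the DN model of the abstract:

```python
import numpy as np

def divisive_normalize(stimulus, tau=50, sigma=1e-4):
    """Divisively normalize a signal by a leaky-integrator estimate of its mean.

    `tau` is the integrator time constant in samples; `sigma` is a small
    semi-saturation constant.  Both are illustrative choices.
    """
    est = stimulus[0]
    out = np.empty_like(stimulus, dtype=float)
    for t, s in enumerate(stimulus):
        est += (s - est) / tau        # running estimate of local luminance
        out[t] = s / (sigma + est)    # divisive normalization
    return out

# same relative (20%) modulation at two mean light levels 1000x apart
t = np.arange(2000)
carrier = 1.0 + 0.2 * np.sin(2 * np.pi * t / 100)
dim, bright = 0.01 * carrier, 10.0 * carrier
r_dim = divisive_normalize(dim)
r_bright = divisive_normalize(bright)
```

After the initial transient, the two normalized responses are nearly identical even though the raw inputs differ in intensity by three orders of magnitude.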
Rieke F, Rudd ME. The Challenges Natural Images Pose for Visual Adaptation. Neuron 2009, 64, 605–616.
Carandini M, Heeger DJ. Normalization as a canonical neural computation. Nature Reviews Neuroscience 2012, 13, 51–62.
Nikolaev A, Zheng L, et al. Network Adaptation Improves Temporal Representation of Naturalistic Stimuli in Drosophila Eye: II Mechanisms. PLOS One 2009, 4(1), 1–12.
Kazuhisa Fujita1,2, Yoshiki Kashimori2
1Komatsu University, Dept. of Clinical Engineering, Komatsu, Japan; 2University of Electro-Communications, Dept. of Engineering Science, Chofu, Tokyo, Japan
Correspondence: Kazuhisa Fujita (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P284
Computer simulation is a standard tool in neuroscience. Its accuracy is affected not only by the computational model and the numerical method but also by the floating-point precision used in the simulation. The floating-point types in the C language are float (single precision), double (double precision), and long double (extended double precision). By default, double precision is usually selected without further consideration. In recent years, large-scale and real-time simulations of neural systems have been extensively attempted, using not only supercomputers but also desktop workstations, where a graphics board or an accelerator board with a GPU accelerates the simulation. On a graphics board, computation with single precision is faster than with double precision, and single precision also reduces data-transfer time. If floating-point precision has little effect on the accuracy of a simulation result, single precision can be used without concern, yielding an efficiently accelerated simulation. In this study, we investigate the effect of single, double, and extended double precision on the simulated dynamics of neuronal activity.
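The precision trade-off can be illustrated with a minimal numerical experiment, with numpy's float32/float64 playing the roles of C's float/double. This sketch is not the authors' benchmark; it simply shows round-off accumulating over many small integration steps:

```python
import numpy as np

def integrate_decay(dtype, v0=1.0, tau=20.0, dt=0.001, steps=100_000):
    """Forward-Euler integration of dv/dt = -v/tau at a given precision.

    With 100,000 steps, per-step rounding at single precision accumulates
    into a visible deviation from the double-precision trajectory.
    """
    v = dtype(v0)
    dt, tau = dtype(dt), dtype(tau)
    for _ in range(steps):
        v = v - dt * v / tau
    return float(v)

v32 = integrate_decay(np.float32)        # analogous to C float
v64 = integrate_decay(np.float64)        # analogous to C double
exact = np.exp(-100_000 * 0.001 / 20.0)  # analytic solution e^{-t/tau}
err32, err64 = abs(v32 - exact), abs(v64 - exact)
```

Here the double-precision error is dominated by the Euler method itself, while the single-precision result carries additional round-off; whether that matters depends on the model, which is exactly the question the study addresses.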
P285 Decomposing adaptable elements of optokinetic response into cerebellar and non-cerebellar contributions by modeling and cerebellectomy approach
Shuntaro Miki1, Robert Baker2, Yutaka Hirata1
1Chubu University, Robotics Science and Technology, Matsumoto-cho 1200 2422, Kasugai-shi, Aichi, Japan; 2New York University, Department of Physiology & Neuroscience, 70 Washington Square South, New York, NY 10012, United States
Correspondence: Shuntaro Miki (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P285
Methods: Goldfish were gently restrained at the center of a white cylindrical water tank, with eye coils sutured binocularly on the cornea around the pupils for eye position measurement. The visual stimulus was projected on the wall of the water tank and rotated in the clockwise direction at 20 deg/s for 8 s, then stopped for 8 s, repeatedly, to generate horizontal OKR. This stimulation was continued for 3 h to induce OKR gain adaptation. Two kinds of experiments were conducted: acute and chronic cerebellectomy. In the former, normal goldfish (n = 8) underwent the 3-hour OKR training, after which the cerebellum was acutely removed. In the latter, cerebellectomy was conducted at least one week before the experiment, and the same 3-hour training was applied (n = 8). From eye velocity data recorded during these experiments, the parameters G1, G0 and H of the OKR model implemented in MATLAB Simulink were estimated by fitting the unit step response of the model to the experimental data with a nonlinear optimization method (lsqnonlin function).
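A hedged sketch of this fitting procedure, with scipy's least_squares standing in for MATLAB's lsqnonlin. The step-response parameterization below (direct gain G1, indirect gain G0, and a feedback gain H that lengthens a velocity-storage time constant) is a hypothetical stand-in, since the abstract does not give the Simulink model's equations:

```python
import numpy as np
from scipy.optimize import least_squares

def step_response(params, t, tau0=1.0):
    """Unit-step eye-velocity response of a hypothetical direct+indirect model.

    G1 scales an instantaneous direct pathway; G0 scales an indirect pathway
    whose effective time constant tau0 / (1 - H) is lengthened by a positive
    feedback gain H.  This is an assumed form, not the authors' model.
    """
    G1, G0, H = params
    tau_eff = tau0 / (1.0 - H)
    return G1 + G0 * (1.0 - np.exp(-t / tau_eff))

# synthetic "experimental" eye velocity with measurement noise
t = np.linspace(0.0, 8.0, 200)
true_params = (0.4, 0.3, 0.5)
rng = np.random.default_rng(1)
data = step_response(true_params, t) + 0.01 * rng.normal(size=t.size)

# scipy's least_squares plays the role of MATLAB's lsqnonlin
fit = least_squares(lambda p: step_response(p, t) - data,
                    x0=(0.1, 0.1, 0.1), bounds=([0, 0, 0], [2, 2, 0.99]))
G1_hat, G0_hat, H_hat = fit.x
```

Fitting a step response in this way recovers the pathway gains and the feedback parameter from a single eye-velocity trace, which is the essence of the estimation the abstract describes.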
Results: In the acute cerebellectomy experiment (Fig. 1b, Left), G1 and G0 increased during the 3-hour training. H increased for the initial 20 min but decreased thereafter, returning to its pre-training value. After acute cerebellectomy, the increased G1 and G0 went back to their pre-training values. By contrast, the parameter H, which had increased and then decreased during the training, increased after acute cerebellectomy. In the chronic cerebellectomy experiment (Fig. 1b, Right), G1 and G0 did not change during the 3-hour training, while H increased gradually without the significant decrease seen in the acute cerebellectomy experiment, reaching a value comparable to that after acute cerebellectomy.
Conclusion: Changes in both the direct and indirect components of OKR eye velocity, represented by G1 and G0 in the model, are totally cerebellum dependent. By contrast, the change in the VSM time constant, represented by H, consists of cerebellar and non-cerebellar contributions. These results suggest that OKR adaptation, specifically the changes in the VSM, contains cerebellar and non-cerebellar adaptable elements.
Cohen J. Statistical power analysis for the behavioral sciences (Rev. ed.). Hillsdale, NJ, US: Lawrence Erlbaum Associates, Inc.
Kodama T, du Lac S. Adaptive Acceleration of Visually Evoked Smooth Eye Movements in Mice. Journal of Neuroscience 2016, 36(25), 6836–6849.
Yinyun Li, Zhong Zhang
Beijing Normal University, Department of Management, Beijing, China
Correspondence: Yinyun Li (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(suppl 2):P286
Synaptic plasticity is intrinsically determined by calcium signalling in spines. In addition to the calcium influx into the synapse through voltage-gated calcium channels (VGCCs) and N-methyl-D-aspartate (NMDA) receptors, the function of calcium released from internal stores in mediating inter-synaptic cross-talk has barely been modeled. This work investigates how different sources of calcium contribute to inter-synaptic cross-talk and synaptic clustering. Based on experimental observations, we developed an abstract mathematical model of a one-dimensional system with spines uniformly distributed along the connected dendrite. We modeled the biophysical process of calcium-induced calcium release (CICR) in the dendritic smooth endoplasmic reticulum (SER). Our model compared the distinct roles that calcium diffusion, back-propagated action potentials (bAPs), and CICR play in synaptic clustering and inter-synaptic cross-talk. The simulations demonstrated that the calcium signal extruded from a spine into the dendrite requires amplification by CICR before invading neighboring spines to induce plasticity. Our model predicts that the initial calcium concentration in the SER may discriminate between different types of neuronal activity and induce completely different synaptic potentiation and depression.
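A minimal sketch of the regenerative-amplification idea: a 1D reaction-diffusion model in which dendritic calcium above a threshold triggers additional release from the store (CICR), so a localized signal that would otherwise die out can propagate to neighboring locations. The equations, boundary handling, and constants are illustrative assumptions, not the authors' model:

```python
import numpy as np

def run_calcium(cicr_gain, n=200, dx=0.5, dt=0.01, steps=4000,
                D=1.0, decay=0.5, thresh=0.3):
    """1D dendritic calcium with diffusion, extrusion, and threshold CICR.

    Above `thresh`, the store releases extra calcium at rate `cicr_gain`,
    regeneratively amplifying a signal injected at one end.  Boundary nodes
    simply omit the diffusion term.  All constants are illustrative.
    """
    c = np.zeros(n)
    c[0:5] = 1.0                             # calcium from one active spine
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        release = cicr_gain * (c > thresh)   # CICR source term
        c = np.clip(c + dt * (D * lap - decay * c + release), 0.0, 5.0)
    return c

passive = run_calcium(cicr_gain=0.0)         # diffusion + extrusion only
amplified = run_calcium(cicr_gain=0.5)       # with CICR amplification
```

Without CICR the injected calcium decays away; with CICR a self-sustaining elevated region forms and spreads along the dendrite, mirroring the model's conclusion that cross-talk between spines requires amplification by store release.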
Kevin Lin, Zhuocheng Xiao
University of Arizona, Department of Applied Mathematics, Tucson, AZ, United States
Correspondence: Kevin Lin (email@example.com)
BMC Neuroscience 2018, 19(suppl 2):P287
A common task in computer modeling of large networks is to collect dynamical statistics like firing rates and correlations elicited by stimuli. This can be computationally expensive if the system at hand is sufficiently complex; the expense is amplified in tasks like parameter estimation and sensitivity analysis, which are necessary when dealing with data and intrinsically involve repeated model runs. This is especially the case for spiking network models, which typically involve interactions over a wide range of scales.
Multilevel Monte Carlo (MLMC) is a class of numerical methods invented to accelerate simulation-based statistical estimation. Originally developed for stochastic differential equation (SDE) models commonly used in, e.g., physics and finance, it has been extended to a variety of settings, including models of stochastic chemical kinetics. The basic idea behind MLMC is to make a fast but potentially biased estimate using large timesteps, then apply a correction computed from a smaller number of more expensive, small-timestep runs. MLMC is not universally applicable: its effectiveness depends on the underlying dynamics. But for certain types of systems, it can offer great speed-up. In this study, we assess the utility of MLMC for networks of spiking neurons, using a combination of mathematical analysis and numerical tests on prototypical models. Focusing on networks of leaky integrate-and-fire (LIF) neurons, we have studied MLMC both by analyzing an associated Fokker–Planck equation and by numerical tests. Our main findings are: 1) By studying a Fokker–Planck equation for the coupling of single cells, we found that MLMC is effective under broad conditions. By induction, MLMC is also effective for feed-forward networks; and since efficiency varies continuously with parameters, MLMC can be effective for predominantly feed-forward networks. 2) Numerical studies of randomly connected recurrent networks have shown that the effectiveness of MLMC depends strongly on the parameter regime. In particular, for systems operating in a homogeneous, “mean-field”-like regime in which cells are only weakly correlated, we found MLMC to be rather effective. In contrast, for networks operating in partially synchronous regimes, MLMC is less effective. Our results suggest that MLMC may offer significant speed-up for collecting statistics from spiking network models, particularly for predominantly feed-forward networks and for recurrent networks operating in a homogeneous regime. 
However, in situations where a recurrent network exhibits partial or full synchrony, straightforward extensions of MLMC may not be effective, and more work is required to develop efficient algorithms.
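The basic two-level MLMC idea can be sketched for a simple SDE (an Ornstein–Uhlenbeck process, not a spiking network): many cheap coarse-step paths give a biased estimate, and a few coupled coarse/fine path pairs, driven by the same Brownian increments, estimate the bias correction. All model and parameter choices here are illustrative:

```python
import numpy as np

def euler_paths(n_paths, dt, n_steps, sigma=0.5, rng=None, dW=None):
    """Euler–Maruyama for the OU process dX = -X dt + sigma dW, X0 = 1."""
    x = np.ones(n_paths)
    for k in range(n_steps):
        inc = dW[:, k] if dW is not None else rng.normal(0, np.sqrt(dt), n_paths)
        x = x + (-x) * dt + sigma * inc
    return x

def mlmc_two_level(n_coarse, n_fine, T=1.0, m=8, sigma=0.5, seed=0):
    """Two-level MLMC estimate of E[X_T].

    Level 0: many cheap paths with coarse step T/m.
    Level 1: fewer paths where coarse (T/m) and fine (T/2m) solvers share
    the same Brownian increments; their mean difference corrects the bias.
    """
    rng = np.random.default_rng(seed)
    x0 = euler_paths(n_coarse, T / m, m, sigma, rng=rng)          # level 0
    dW_fine = rng.normal(0, np.sqrt(T / (2 * m)), (n_fine, 2 * m))
    dW_coarse = dW_fine[:, 0::2] + dW_fine[:, 1::2]  # sum pairs of increments
    xf = euler_paths(n_fine, T / (2 * m), 2 * m, sigma, dW=dW_fine)
    xc = euler_paths(n_fine, T / m, m, sigma, dW=dW_coarse)
    return x0.mean() + (xf - xc).mean()                           # level 0 + 1

est = mlmc_two_level(n_coarse=200_000, n_fine=20_000)
exact = np.exp(-1.0)   # E[X_T] = e^{-T} for the OU process with X0 = 1
```

Because the coupled coarse and fine paths see identical noise, their difference has small variance, so the correction needs far fewer samples than the base estimate; this variance reduction is what breaks down when network synchrony decouples the two solvers.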
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.