- Meeting abstracts
- Open Access
27th Annual Computational Neuroscience Meeting (CNS*2018): Part One
© The Author(s) 2018
- Published: 29 October 2018
Daniel Wolpert1,2
1University of Cambridge, Department of Neuroscience, Cambridge, UK; 2Columbia University, Mortimer B. Zuckerman Mind Brain Behavior Institute, New York, NY, United States
Correspondence: Daniel Wolpert (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):K1
The effortless ease with which we humans move our arms, our eyes, even our lips when we speak masks the true complexity of the control processes involved. This is evident when we try to build machines to perform human control tasks. I will review our work on how humans learn to make skilled movements, covering probabilistic models of learning, including Bayesian and structural learning, as well as the role of context in activating motor memories. I will also review our work showing the intimate interactions between decision making and sensorimotor control processes. This includes the bidirectional flow of information between elements of decision formation, such as accumulated evidence, and motor processes, such as reflex gains. Taken together, these studies show that probabilistic models play a fundamental role in human sensorimotor control.
University of Washington, Department of Computer Science & Engineering, Seattle, WA, United States
Correspondence: Rajesh Rao (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):K2
How can the structure of brain circuits inform large-scale theories of brain function? We explore this question in the context of Bayesian models of perception and action, which prescribe optimal ways of combining sensory information with prior knowledge and rewards to enact behaviors. I will briefly review two Bayesian models, deep predictive coding and partially observable Markov decision processes (POMDPs) and illustrate how circuit structure can provide important clues to systems-level computation.
Boston University, Department of Mathematics & Statistics, Boston, MA, United States
Correspondence: Nancy Kopell (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):K3
The neuroscience community is just beginning to understand how brain rhythms take part in cognition and how flexible the kinds of computations that can be made with rhythms are. In this talk, I will discuss some case studies demonstrating this enormous flexibility and its important functional implications. Each of the case studies concerns some form of coordination. Examples include the interaction of multiple intrinsic time scales in a cortical rhythm in response to a periodic input; the ability of a slow rhythm in the striatum to modulate two other rhythms in different phases of its period; and the ability of a parietal rhythm to guide the formation, manipulation and termination of a kind of working memory.
Brandeis University, School of Life Sciences, Waltham, MA, United States
Correspondence: Eve Marder (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):K4
Experimental work on the crustacean stomatogastric ganglion (STG) has revealed a 2- to 6-fold variability in many of the parameters that are important for circuit dynamics. At the same time, a large body of theoretical work shows that similar network performance can arise from diverse underlying parameter sets. Together, these lines of evidence suggest that each individual animal, at any moment in its lifetime, has found a different solution to producing “good enough” motor patterns for healthy performance in the world. This poses the question of the extent to which animals with different sets of underlying circuit parameters can respond reliably and robustly to environmental perturbations and neuromodulation. Consequently, we study the effects of temperature, pH, high K+, and neuromodulation on the pyloric rhythm of crabs. While all animals respond remarkably well to large environmental perturbations, extreme perturbations that produce system “crashes” reveal the underlying parameter differences in the population. Moreover, models of homeostatic regulation of intrinsic excitability give insight into the kinds of mechanisms that could give rise to the highly variable solutions to stable circuit performance.
Jan Homann, Michael Berry, Sue-Ann Koay, Alistair M. Glidden, David W. Tank
Princeton University, Department of Neuroscience, Princeton, NJ, United States
Correspondence: Jan Homann (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):F1
Predictions about the future are important for an animal in order to interact with its environment. Therefore, predictive computation might be a core operation carried out by neocortical microcircuits. We explored whether the primary visual cortex can perform such computations by presenting repeated temporal sequences of static images with occasional unpredictable disruptions. Simultaneous recordings of 150–250 neurons were performed using two-photon Ca2+ imaging of layer 2/3 neurons labeled with GCaMP6f in awake mice that were head-fixed but free to run on a styrofoam ball. In our visual stimuli, each spatial frame consisted of either an oriented grating or a random superposition of Gabor filters. We found that most of the neurons (~ 98%) showed a strong reduction in activity over a few repeats of the temporal sequence. When we presented a frame that violated the temporal sequence, these neurons responded transiently. In contrast, a small fraction (~ 2%) had activity that ramped up over several repeats, before reaching a steady, sequence-modulated response. This partitioning of the neural population into ‘transient’ and ‘sustained’ responses was observed for all temporal sequences tested. At the same time, the identity of which neurons were transient versus sustained depended on the temporal sequence.
These features—adaptation to a repeated temporal sequence and a transient response to a sequence violation—are hallmarks of predictive coding. After a few repeats, the temporal sequence becomes predictable and can be efficiently represented by a small subset of the neural population. The unpredictable frame then elicits an ‘error’ signal because it encodes a potentially important novelty. In order to explore whether neural novelty signals could be useful to the animal, we performed behavioral experiments with matched visual stimuli that demonstrated that mice could easily learn to lick in response to a violation of an ongoing temporal sequence.
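The adaptation-and-novelty behavior described above can be illustrated with a toy predictive-coding unit (an illustrative sketch, not the authors' model): it keeps one learned template per position in the repeated sequence and responds in proportion to prediction error, so repeated frames adapt away while a violation frame evokes a transient response. The representation and learning rate here are assumptions.

```python
import numpy as np

def transient_unit(frames, period, lr=0.5):
    """Toy 'transient' neuron: maintains a running template for each position
    in a repeated sequence and responds with the prediction-error magnitude,
    so responses adapt over repeats and rebound at a sequence violation."""
    preds = [np.zeros_like(frames[0]) for _ in range(period)]
    resp = []
    for t, f in enumerate(frames):
        p = t % period                                # position in sequence
        resp.append(np.linalg.norm(f - preds[p]))     # prediction error
        preds[p] = preds[p] + lr * (f - preds[p])     # update template
    return np.array(resp)
```

Presenting a four-frame sequence ten times with an oddball frame substituted in the eighth repeat reproduces the qualitative pattern: strongly adapted responses to repeats and a transient burst at the violation.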
F2 Response to deep brain stimulation in essential tremor: predictions beyond noisy data with a Wilson-Cowan model
Benoit Duchet1, Gihan Weerasinghe1, Christian Bick2, Hayriye Cagnan1, Rafal Bogacz1
1University of Oxford, Nuffield Department of Clinical Neurosciences, Oxford, United Kingdom; 2University of Oxford, Mathematical Institute, Oxford, United Kingdom
Correspondence: Benoit Duchet (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):F2
Aurel A. Lazar, Chung-Heng Yeh
Columbia University, Department of Electrical Engineering, New York, NY, United States
Correspondence: Chung-Heng Yeh (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):F3
Federica Capparelli, Klaus Pawelzik, David Rotermund, Udo Ernst
University of Bremen, Institute for Theoretical Physics, Bremen, Germany
Correspondence: Federica Capparelli (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):O1
In V1, neuronal responses are sensitive to context: responses to stimuli presented within the classical receptive field (cRF) are modulated by stimuli in the surround. Recently, sparse coding models have been successful in explaining part of these modulatory effects: their dynamics implements an inference process that seeks an optimal (w.r.t. accuracy and sparseness) representation of a visual input in terms of fundamental features. This is achieved through a competition between similarly tuned neurons with overlapping input fields, which also mediates contextual modulation. However, this connection scheme implies that neurons with non-overlapping input fields do not interact. Therefore, the proposed mechanism does not provide a satisfactory explanation of these phenomena, since contextual effects are usually caused by surround stimuli positioned far from the cRF (e.g., Mizobe et al. report collinear modulation for center-surround distances of up to 12 deg). To overcome this limitation, we propose an extension of the classical framework by defining a new generative model for visual scenes that includes dependencies among different features in spatially well-separated locations. To perform inference in this model, we also derive a dynamical system that can be mapped to a neural circuit and a lateral connection scheme for optimally processing local and contextual information.
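The classical sparse-coding dynamics being extended here can be sketched with a minimal locally-competitive-algorithm (LCA) style network. This is an illustrative sketch, not the authors' implementation: it makes explicit the limitation discussed above, since the lateral interaction term comes only from the overlap (Gram matrix) of the feature dictionary, so units with non-overlapping fields never interact.

```python
import numpy as np

def lca_inference(Phi, x, lam=0.05, dt=0.05, steps=2000):
    """Classical sparse-coding inference: a dynamical system whose fixed
    points minimize 0.5*||x - Phi @ a||^2 + lam*||a||_1.
    The competition term G vanishes for units with non-overlapping fields."""
    n = Phi.shape[1]
    u = np.zeros(n)                       # internal (membrane-like) states
    G = Phi.T @ Phi - np.eye(n)           # competition = receptive-field overlap
    b = Phi.T @ x                         # feedforward drive
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    for _ in range(steps):
        u += dt * (b - u - G @ soft(u))   # leaky integration with inhibition
    return soft(u)                        # sparse coefficients
```

Run on a signal generated from a few dictionary elements, the dynamics settle on a sparse, accurate representation; the proposed extension adds dependencies between such coefficients at well-separated locations.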
This work has been supported by the Creative Unit I-See of the University of Bremen and the BMBF, Bernstein Award Udo Ernst, Grant No. 01GQ1106.
Olshausen BA, Field DJ. Sparse coding with an overcomplete basis set: a strategy employed by V1? Vision Res 1997, 37, 3311–3325.
Zhu M, Rozell CJ. Visual nonclassical receptive field effects emerge from sparse coding in a dynamical system. PLoS Comput Biol 2013, 9, e1003191.
Iacaruso MF, Gasler IT, Hofer SB. Synaptic organization of visual space in primary visual cortex. Nature 2017, 547, 449–452.
Angelucci A, Bijanzadeh M, Nurminen L, Federer F, Merlin S, Bressloff PC. Circuits and mechanisms for surround modulation in visual cortex. Annu Rev Neurosci 2017, 40, 425–451.
Gabrielle Gutierrez1, Eric Shea-Brown1, Fred Rieke2
1University of Washington, Department of Applied Mathematics, Seattle, WA, United States; 2University of Washington, Departments of Physiology & Biophysics, Seattle, WA, United States
Correspondence: Gabrielle Gutierrez (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):O2
The retina is organized in convergent and divergent layers that compress and expand signals before passing visual information along to the brain. Receptive fields anatomically correspond to the collection of inputs that converge upon a single retinal output cell. This subunit circuit structure produces an information bottleneck because information is compressed along the pathway to an output neuron. We wondered whether the structure of the retina, combined with its adaptation properties, serves to preserve information given this bottleneck.
A remarkable property of the retina is its ability to adapt its processing to environmental conditions. Adaptation to background luminance shifts the nonlinear response filters of the subunits over a timescale of about a minute. This has the effect of adjusting the linearity of responses in a manner that is dependent on the luminance environment. Another feature of the retina is the diversity of cell types present at the output layer. Among these are ON and OFF versions of cell types, whose sensitivities are complementary but not symmetrical. Having complementary cell types combined with adaptation mechanisms may allow the retina to leverage these redundancies under certain conditions while having the flexibility to adapt to an efficient or predictive code in other conditions. We want to know whether the retina adapts its processing to maximize visual information transmission by adjusting the subunit response functions in the circuit. To quantify the amount of information that is preserved in the signals exiting the retina in this kind of setup, we estimate the mutual information between a naturalistic stimulus set and the output from our model retina circuit. We use a binless estimator to account for the fact that the input signals and the outputs are continuous. Consistent with past studies, our preliminary results indicate that the optimal thresholds for the nonlinear subunits depend on the amount of input noise given a naturalistic distribution of stimulus contrasts. Our work builds on past studies by incorporating the known subunit structure into the circuit, which produces information compression. Under circumstances where subunits receive independent inputs, rather than correlated inputs, the circuit is optimal when ON and OFF subunits redundantly encode the most prevalent stimuli for a broad range of subunit noise levels.
Our preliminary results suggest novel ways in which adaptation mechanisms, along with the particular bottleneck structure of the retina, enable the retina to adapt the computations it produces in different contexts.
O3 Structural and dynamical properties of local cortical networks result from robust associative learning
Danke Zhang, Chi Zhang, Armen Stepanyants
Northeastern University, Department of Physics, Boston, MA, United States
Correspondence: Armen Stepanyants (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):O3
Many ubiquitous features characterize the structure and dynamics of local cortical networks. At the level of pair-wise connectivity, it is known that the probabilities of excitatory connections are generally lower than those of inhibitory connections, and the majority of reported probabilities lie in the 0.10–0.19 range if the presynaptic cell is excitatory and the 0.25–0.56 range if it is inhibitory. It is also known that the distributions of connection weights have stereotypic shapes, with the majority of measured coefficients of variation (CV) of unitary postsynaptic potentials in the 0.85–1.1 range for excitatory connections and slightly lower values, 0.78–0.96, for inhibitory ones. At the level of connectivity within 3-neuron clusters, several overrepresented connectivity motifs have been discovered. Information becomes scarce as one considers larger clusters of neurons, but even here deviations from random connectivity have been reported for clusters of 3–8 neurons. Similarly, many universal features characterize activity of neurons in local cortical networks. For example, individual neurons exhibit highly irregular spiking activity, resembling Poisson processes with inter-spike-interval CVs close to one. Spike trains of nearby neurons are only marginally correlated, 0.04–0.15, and, at the network level, spiking activity can be described as sustained, irregular, and asynchronous. In this study, we pursue the hypothesis that associative learning alone is sufficient to explain these network features. To test this hypothesis, we trained recurrent networks of excitatory and inhibitory McCulloch and Pitts neurons [1, 2] on memory sequences of varying lengths and compared network properties to those observed experimentally. Learning in the network is mediated by changing connection weights in the presence of biologically inspired constraints.
(1) Input connection weights of each neuron are sign-constrained to be non-negative if the presynaptic neuron is excitatory and non-positive if it is inhibitory. (2) Input weights of each neuron are homeostatically constrained to have a predefined l1-norm. (3) Each neuron must attempt to learn its associations robustly, so that they can be recalled correctly in the presence of a given level of postsynaptic noise. We explore structural and dynamical properties of associative networks in the space of these constraints, and show that there is a unique region of parameters that is consistent with all of the above-described experimental observations. In this region, local cortical circuits are loaded with associative memories close to their capacity, and memories can be successfully retrieved even in the presence of noise comparable to the baseline variations in the postsynaptic potential, which provides an independent validation of the theory in terms of the hypothesized network function. The confluence of these results suggests that many structural and dynamical properties of local cortical networks are simply a byproduct of associative learning.
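A single-neuron version of this constrained learning scheme can be sketched as follows (an illustrative reconstruction, not the authors' code): perceptron-style updates with a robustness margin, with each update followed by projection onto the sign constraints and rescaling to the fixed l1-norm. All parameter values are assumptions.

```python
import numpy as np

def train_neuron(X, y, exc_mask, kappa=0.1, w_norm=1.0, lr=0.05, epochs=200):
    """One McCulloch-Pitts neuron trained under the three constraints above:
    (1) sign constraints set by presynaptic identity (exc_mask),
    (2) a fixed l1-norm of the input weights (homeostasis),
    (3) a margin kappa so associations survive postsynaptic noise."""
    rng = np.random.default_rng(1)
    n = X.shape[1]
    w = rng.uniform(0.0, 1.0, n) * np.where(exc_mask, 1.0, -1.0)
    w *= w_norm / np.abs(w).sum()
    for _ in range(epochs):
        for xi, ti in zip(X, y):
            s = 2.0 * ti - 1.0                  # target {0,1} -> {-1,+1}
            if s * (w @ xi) < kappa:            # margin violated: update
                w += lr * s * xi
            # project onto sign constraints (1), then l1-norm constraint (2)
            w = np.where(exc_mask, np.maximum(w, 0.0), np.minimum(w, 0.0))
            w *= w_norm / max(np.abs(w).sum(), 1e-12)
    return w
```

Because the projections are applied after every update, the returned weights satisfy the sign and norm constraints exactly, whatever the association load.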
This work is supported by Air Force grant FA9550-15-1-0398 and NSF grant IIS-1526642.
Chapeton J, Fares T, LaSota D, Stepanyants A. Efficient associative memory storage in cortical circuits of inhibitory and excitatory neurons. PNAS 2012, 109, E3614-3622.
Chapeton J, Gala R, Stepanyants A. Effects of homeostatic constraints on associative memory storage and synaptic connectivity of cortical circuits. Front Comput Neurosci 2015, 9, 74.
Southern Methodist University, Department of Mathematics, Dallas, TX, United States
Correspondence: Kathryn Hedrick (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):O4
The theory of attractor neural networks has been influential in our understanding of the neural processes underlying spatial, declarative, and episodic memory. Many theoretical studies focus on the inherent properties of an attractor, such as its structure and capacity. Relatively little is known about how an attractor neural network responds to external inputs, which often carry conflicting information about a stimulus. In this talk I will present analytical results concerning an attractor neural network’s response to conflicting external inputs. My focus is on analyzing the emergent properties of the megamap model, a quasi-continuous attractor network in which place cells are flexibly recombined to represent a large spatial environment (Hedrick and Zhang 2016). In this model, the system shows a sharp transition from the winner-take-all (WTA) mode, which is characteristic of standard continuous attractor neural networks, to a combinatorial mode in which the equilibrium activity pattern combines embedded attractor states in response to conflicting external inputs. I derive a numerical test for determining the operational mode of the system a priori. I then derive a linear transformation from the full model to a reduced 2-unit model that has similar qualitative behavior. The analysis of the reduced model and explicit expressions relating the parameters of the reduced model to the megamap elucidate the conditions under which the combinatorial mode emerges and the dynamics in each mode, given the relative strength of the attractor network and the relative strengths of the two conflicting inputs. Although my focus is on a particular attractor network model, I describe a set of conditions under which the reduced model can be applied to more general attractor neural networks. The reduced 2-unit model captures the amplitude of each activity bump but not its radius.
I extend this reduced model to examine the spatial effects on the system’s behavior by approximating the activity bump and recurrent connections using two-dimensional Gaussian tuning curves. Analysis of this reduced model reveals that these spatial effects underlie the nonlinearities observed in the full megamap model but not in the reduced 2-unit model. I compare these results to numerical simulations and electrophysiological data from an experiment in which hippocampal place cells resolve conflicting external inputs from the medial entorhinal cortex (MEC) and lateral entorhinal cortex (LEC) when local and global cues are rotated in opposite directions (Knierim and Neunuebel 2016). In this experiment, place cells in the CA3 (which are believed to form attractor neural networks) coherently follow the noisy inputs from the LEC rather than the much stronger spatial inputs from the MEC. The reduced model predicts that this surprising response is due to three factors: (1) CA3 place cells are initially driven by the LEC input only, (2) the attractor network acts in the WTA mode, and (3) connections from MEC to CA3 are governed by fast Hebbian synaptic plasticity. To bridge the gap between the idealistic theory and the noisy electrophysiological data, I run numerical simulations using the conductance-based integrate and fire model and unsupervised Hebbian plasticity. The noise in the model leads to the partial remapping observed experimentally.
Knierim JJ, Neunuebel JP. Tracking the flow of hippocampal computation: Pattern separation, pattern completion, and attractor dynamics. Neurobiol Learn Mem 2016, 129, 38–49.
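The qualitative switch between the winner-take-all and combinatorial modes can be reproduced in a generic 2-unit rate model (a sketch under assumed parameters, not the reduced model derived in the talk): each unit stands for one embedded attractor state, and the strength of mutual inhibition determines the operating mode.

```python
import numpy as np

def two_unit_response(I1, I2, beta, alpha=0.5, dt=0.01, steps=5000):
    """Two rate units, each representing one embedded attractor state, with
    self-excitation alpha and mutual inhibition beta. Large beta yields
    winner-take-all; small beta lets both states stay active (combinatorial).
    All parameter values here are illustrative assumptions."""
    r = np.array([0.05, 0.05])
    I = np.array([I1, I2])
    for _ in range(steps):
        drive = I + alpha * r - beta * r[::-1]     # input + self - cross
        r = r + dt * (-r + np.maximum(drive, 0.0)) # rectified rate dynamics
    return r
```

With conflicting inputs I1 = 1.0 and I2 = 0.8, strong inhibition (beta = 2) suppresses the weaker state entirely, while weak inhibition (beta = 0.2) yields an equilibrium that blends both embedded states.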
O5 Topologies of repetitive functional network motifs vary dynamically with age in the developing human brain: Evidence from very high-dimensional invasive brain signals
Caterina Stamoulis1, Phillip Pearl2
1Harvard Medical School, Faculty of Medicine, Boston, MA, United States; 2Harvard Medical School, Department of Neurology, Boston, MA, United States
Correspondence: Caterina Stamoulis (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):O5
Throughout the course of the day, or even an hour, functional brain networks are continuously recruited to process thousands of inputs from the outside world and respond to the demands of countless behaviors and cognitive processes. Across scales of organization, these networks’ small-world and scale-free topologies facilitate optimally efficient neural information processing. However, the building blocks of these networks (modules or motifs), their emergence, re-organization during development and time-dependent stereotypy remain poorly understood. Unrelated theoretical work has shown that specific network patterns emerge as a result of a dynamic system’s propensity towards a stable configuration. There is also growing evidence from both animal and human studies that a relatively small number of such modules are combined (in potentially infinite ways) to give rise to the observed functional network topologies. In this study, we investigated the organization, size and stereotypy of functional network motifs in the developing human brain, using very high-dimensional invasive human electrophysiological signals, collected continuously over long periods of time (typically several days) from a relatively large number of children and young adults (n = 39, age < 1 to ~ 23 years) with intracerebral electrode grids covering different parts of the brain. All patients had recordings from a relatively large number (> 70) of electrodes. Information theoretic and contraction theoretic measures were used to estimate functional connectivity, identify sub-network patterns (motifs) that occurred repetitively over time and independently of the area of the brain being spatially sampled, and characterize their stability (using an eigenvalue analysis).
A relatively small number of functionally active nodes were estimated, which formed stable patterns that occurred repetitively across temporal scales and brain regions. The size of these patterns (number of activated nodes) changed with age, with progressively smaller sub-graphs (3–4 nodes) emerging as a function of neural maturation. Across ages, identified motifs were consistently correlated with network stability. These results indicate that, although stable functional network motifs may be in place early in life to process multi-modal sensory information, re-organization of the brain’s neural circuitry as a function of neural maturation may lead to increasingly parsimonious modules that facilitate increasingly efficient neural information processing. These modules may also constitute a network-level biomarker of neural maturation at the macroscale sampled by invasive human recordings.
O6 Revealing principles of cortical computation using the Allen Brain Observatory: A large, standardized calcium imaging dataset from the mouse visual cortex
Michael A. Buice1, Saskia E.J. de Vries1, Gabriel Ocker1, Michael Oliver1, Peter Ledochowitsch1, Daniel Millman1, Eric Shea-Brown2, Christof Koch1, Jianghong Shi2, R Clay Reid1
1Allen Institute for Brain Science, Modelling, Analysis and Theory, Seattle, WA, United States; 2University of Washington, Department of Applied Mathematics, Seattle, WA, United States
Correspondence: Saskia E.J. de Vries (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):O6
A prominent question of sensory processing is how information is represented and transformed by the neural circuit through multiple layers and across multiple areas in order to create perceptions and ultimately guide behavior. In order to facilitate uncovering these principles, we have created the Allen Brain Observatory, a public dataset of neural responses collected from visual areas of awake mouse cortex using 2-photon calcium imaging. We systematically recorded responses from over 50,000 neurons in over 500 experiments, using a high-throughput imaging pipeline. Data were collected from 6 cortical areas and 4 cortical layers. GCaMP6f was transgenically expressed, driven by 13 different Cre lines which limit expression to specific subsets of excitatory (10 Cre lines) or inhibitory cells (3 Cre lines). Visual responses were imaged in response to an array of both artificial and natural stimuli, including drifting gratings, static gratings, locally sparse noise, natural scenes and natural movies, while the mouse was awake and free to run on a running disc. Several metrics were computed to describe the visual responses of the neurons, including orientation and direction selectivity, image selectivity, lifetime sparseness, and receptive field areas. Surveying these metrics across areas, layers and Cre-defined cell populations, several patterns emerge. Layer 4 exhibited clear differences across areas and cell populations, but these differences were reduced in the other layers. This pattern is consistent with layer 4 predominantly carrying feedforward thalamocortical input, while layers 2/3, 5 and 6 represent higher-order responses. One of the most striking results in this dataset is the small number of responsive cells and the remarkable variability of their responses. Only 57% of cells in the Brain Observatory dataset respond to any of the visual stimuli presented. Further, even responsive cells show large trial-to-trial variability.
We fit these neurons with a simple wavelet pyramid model with simple (linear-nonlinear) and complex components (the “energy” model). Roughly 15% of neurons in the dataset show significantly predictable responses to visual stimuli via this model, with relatively low explainable variance. All cells also show some degree of “complex” behavior, i.e. there are no purely “simple” cells according to this model. We compare the representations in each layer and area to responses generated by standard convolutional neural networks (ConvNets), a model derived from the canonical understanding of the cat visual system. We find that the representations in mouse cortex are most similar to early and middle layers of ConvNets, rather than the initial Gabor-like layer thought to describe responses in V1 of cats. Finally, we examine the correlation structure of population activity, showing that correlations in neural responses have an impact on information transmission in an area- and layer-dependent fashion. Furthermore, we show that the “noise” and “signal” correlations are positively correlated throughout the mouse visual system, providing strong evidence against certain types of theories that exhibit “explaining away”, i.e. theories in which neurons with similar mean tuning properties functionally inhibit one another, such as the sparse coding model of Olshausen and Field and some probabilistic coding models. This dataset provides a testbed for theories of cortical computations and will be a valuable resource for the community.
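The signal/noise-correlation analysis mentioned above boils down to a standard computation, sketched here for a pair of neurons (an illustrative sketch, not the Observatory analysis code): signal correlation is the correlation of the trial-averaged tuning curves, while noise correlation is the per-stimulus correlation of the trial-to-trial fluctuations, averaged over stimuli.

```python
import numpy as np

def signal_noise_correlations(resp):
    """resp: array of shape (trials, stimuli, 2) holding responses of two
    neurons. Returns (signal correlation, mean noise correlation)."""
    mean = resp.mean(axis=0)                       # (stimuli, 2) tuning curves
    sig = np.corrcoef(mean[:, 0], mean[:, 1])[0, 1]
    resid = resp - mean                            # trial-to-trial fluctuations
    noise = np.mean([np.corrcoef(resid[:, s, 0], resid[:, s, 1])[0, 1]
                     for s in range(resp.shape[1])])
    return sig, noise
```

A pair of model neurons sharing both a tuning curve and a common noise source shows the positive signal and noise correlations discussed in the abstract.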
Louis-David Lord1, Paul Expert2, Robin Carhart-Harris3, Morten Kringelbach1, Joana Cabral4
1University of Oxford, Department of Psychiatry, Oxford, United Kingdom; 2Imperial College London, Centre for Mathematics of Precision Healthcare, London, United Kingdom; 3Imperial College London, Psychedelic Research Group, London, United Kingdom; 4University of Minho, Life and Health Sciences Research Institute (ICVS), School of Medicine, Braga, Portugal
Correspondence: Louis-David Lord (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):O7
Brain activity can be understood as the exploration of a dynamical landscape of activity configurations over both space and time. This dynamical landscape may be defined in terms of spontaneous transitions within a repertoire of discrete metastable states of functional connectivity (FC), or “FC states”, which underlie different mental processes. It however remains unclear how the brain’s dynamical landscape might be disrupted in altered states of consciousness, such as the psychedelic state. The present study investigates changes in the brain’s dynamical repertoire in a rare fMRI dataset consisting of healthy participants intravenously injected with psilocybin, the psychedelic compound that is the active ingredient in magic mushrooms. We employed a data-driven approach to study brain dynamics in the psychedelic state, which focuses on the dominant FC pattern captured by the leading eigenvector of dynamic FC matrices, and enables the identification of recurrent FC patterns (“FC states”) and their transition profiles over time. We found that an FC state closely corresponding to the fronto-parietal control system was strongly destabilized by the drug, while transitions toward a globally synchronized FC state were enhanced. These differences between brain state trajectories in normal waking consciousness and the psychedelic state suggest that psilocybin induces an alternative type of unconstrained functional integration at the expense of locally segregated activity in specific networks supporting executive function. These results provide a mechanistic perspective on the acute psychological effects of psychedelics, and further raise the possibility that mapping the brain’s dynamical landscape may help guide pharmacological interventions in neuropsychiatric disorders.
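The data-driven step at the heart of this approach, extracting the dominant FC pattern at each time point as the leading eigenvector of an instantaneous phase-coherence matrix, can be sketched as follows (a minimal reimplementation under assumed conventions; clustering the eigenvectors into recurrent FC states, e.g. with k-means, would follow).

```python
import numpy as np

def leading_eigenvectors(bold):
    """bold: (T, N) array of regional signals. For each time point, build
    the phase-coherence matrix cos(theta_i - theta_j) from instantaneous
    Hilbert phases and return its leading eigenvector, i.e. the dominant
    FC pattern at that instant."""
    T, N = bold.shape
    # analytic signal via FFT (numpy-only equivalent of scipy.signal.hilbert)
    F = np.fft.fft(bold - bold.mean(axis=0), axis=0)
    h = np.zeros(T)
    h[0] = 1.0
    if T % 2 == 0:
        h[T // 2] = 1.0
        h[1:T // 2] = 2.0
    else:
        h[1:(T + 1) // 2] = 2.0
    phases = np.angle(np.fft.ifft(F * h[:, None], axis=0))
    V = np.empty((T, N))
    for t in range(T):
        coh = np.cos(phases[t][:, None] - phases[t][None, :])
        vals, vecs = np.linalg.eigh(coh)               # ascending eigenvalues
        v = vecs[:, -1]                                # leading eigenvector
        V[t] = v if v[np.argmax(np.abs(v))] > 0 else -v  # fix sign convention
    return V
```

For two groups of regions oscillating in anti-phase, the leading eigenvector splits the regions into the two communities, which is exactly the structure that is then clustered into FC states.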
University of Iowa, Caltech, Iowa City, IA, United States
Correspondence: Christopher Kovach (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):O8
Interest in the origin and significance of cross-frequency coupling in electrophysiological signals has grown rapidly over the last several years, with particular emphasis on phase-amplitude coupling (PAC). Much of this recent attention has focused on measures of PAC obtained from filtered analytic signals through the comparison of phase and analytic envelope. As use of these measures has increased, so has an appreciation of their ambiguities, attested by an expanding cautionary literature on the topic. Meanwhile, “classical” statistically motivated measures of cross-frequency coupling derived from spectral representations of higher moments have remained at the periphery of the latest surge of attention, due in large part to a common perception that such measures are comparatively difficult to interpret and that they relate to a form of cross-frequency coupling distinct from PAC. Recently, we have shown that common PAC measures are, in fact, fundamentally normalized bispectral estimators which yield smoothed estimates of the true signal bispectrum [1]. Differences between the measures relate to properties of the respective smoothing kernels. In light of this observation, classical bispectral estimators can claim a number of advantages over recently introduced PAC measures, including more favorable bias properties and freedom from the constraints on range and resolution that are inherent in PAC measures. Interpretation of the bispectrum is commonly explained in terms of “quadratic” phase coupling between spectrally narrow signal components; in demonstrating the relationship to PAC measures, we develop an alternative approach to interpretation through a decomposition of the signal into spectrally broad transient components.
The relationship between PAC measures and the bispectrum can be understood by considering the case of a low-frequency transient, corresponding to the “slow” oscillation (SO), accompanied by a transiently windowed high-frequency “fast” oscillation (FO). As detailed in Figures 1 and 2 of reference [1], windowing of the FO at the scale of the SO implies that the bispectrum contains a straightforward representation of the spectrum of the SO and the power spectrum of the FO, from which both might be directly recovered to good approximation. Moreover, within the range of the FO, the phase bispectrum encodes the relative delay between the SO and the FO modulating window. With these insights we develop guidelines for the evaluation of PAC from bispectral statistics. This framework addresses a number of the recently identified limitations and ambiguities of PAC measures. Finally, some extensions of this framework towards the blind recovery of recurring transient signal features are briefly considered. The feasibility of this application is demonstrated through the identification of auditory evoked responses in human intracranial recordings from both controlled stimuli (click trains) and uncontrolled ecologically meaningful stimuli (a video soundtrack) with no foreknowledge of the stimulus.
Kovach CK, Oya H, Kawasaki H. The bispectrum and its relationship to phase-amplitude coupling. NeuroImage 2018, 173:518–539.
Ilya Rybak, Simon Danner, Natalia Shevtsova
Drexel University College of Medicine, Department of Neurobiology and Anatomy, Philadelphia, PA, United States
Correspondence: Ilya Rybak (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):O9
To move effectively in a complex and dynamic environment, limbed animals must vary locomotor speed and adapt their gait to the desired speed and the environment. With increasing locomotor speed, quadrupedal animals, including mice, switch locomotor gait from walk to trot and then to gallop and bound. Centrally, locomotor gaits are controlled by interactions between four central pattern generators (CPGs), located on the left and right sides of the lumbar and cervical enlargements of the cord, each producing rhythmic activity controlling one limb. The activity of these CPGs is coordinated by commissural interneurons (CINs), projecting across the midline to the contralateral side of the cord, and by long propriospinal neurons (LPNs) that connect the cervical and lumbar CPG circuits in both directions. We use computational modeling to investigate how the CIN and LPN connections between the cervical and lumbar, left and right CPGs can be organized and what roles different CIN and LPN pathways play in the control and speed-dependent expression of different gaits. Our model contains four rhythm generators (RGs) with left–right cervical and lumbar CIN interactions and homolateral and diagonal ascending and descending LPN interactions. These interactions are organized via several interneuronal pathways mediated by genetically identified neuron types and are based on their suggested functions and connectivity. Supraspinal (brainstem) drives excite all RGs, thereby controlling oscillation frequency, and inhibit some CINs and LPNs, which allows the model to reproduce the speed-dependent gait transitions observed in intact mice [1]. The model reproduces the experimentally observed loss of particular gaits after selective removal of genetically identified neurons (V2a, V0V, or all V0) and the speed-dependent disruption of hindlimb coordination after deletion of descending (cervical-to-lumbar) LPNs [2].
The model suggests that (1) V0D and V0V CINs together secure left–right alternation, whereas V3 CINs promote left–right synchronization, and that (2) V0D LPNs support diagonal alternation, whereas V0V LPNs promote diagonal synchronization. Thus, V0D CINs and LPNs together stabilize walk, and V0V CINs and LPNs stabilize trot. The transition from trot to gallop and bound occurs when the activity of V3 CINs overcomes the activity of the (brainstem-drive inhibited) V0V CINs and diagonal LPNs. Our simulations have also shown that external inputs to CINs and LPNs, other than the supraspinal drives controlling locomotor frequency, can induce gait changes independent of speed. These inputs may represent the activity of sensory afferents, which is consistent with multiple experimental studies showing that CINs and LPNs receive direct and indirect inputs from sensory afferents. Based on these simulations, we suggest that CINs and LPNs represent the main neural targets for local/intraspinal, supraspinal, and sensory inputs to control interlimb coordination and adjust locomotor gait to various internal and external conditions. The model proposes a series of testable predictions, including the anticipated effects of deleting particular identified types of CINs and LPNs, and can be used as a test bed for simulating various spinal cord perturbations and injuries.
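The drive-dependent switch from alternation to synchronization can be caricatured with a single phase equation (a toy sketch, not the conductance-based model described above; all parameters are illustrative): the left–right phase difference is pulled toward antiphase by a V0-like term and toward in-phase by a V3-like term, with brainstem drive trading one against the other.

```python
import numpy as np

def phase_difference(drive, T=200.0, dt=0.01):
    """Steady-state left-right phase difference (rad) under a given drive."""
    v0 = max(0.0, 1.0 - drive)   # V0-like pathway, inhibited by drive
    v3 = drive                   # V3-like pathway, recruited by drive
    phi = 1.0                    # initial phase difference
    for _ in range(int(T / dt)):
        # v0 > v3 pushes phi toward pi (alternation); v3 > v0 toward 0 (sync)
        phi += dt * (v0 - v3) * np.sin(phi)
    return phi % (2 * np.pi)

walk_trot = phase_difference(0.2)   # low drive: antiphase, phi ~ pi
bound = phase_difference(0.9)       # high drive: in-phase, phi ~ 0
```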
Bellardita C, Kiehn O. Phenotypic Characterization of Speed-Associated Gait Changes in Mice Reveals Modular Organization of Locomotor Networks. Current Biology 2015, 25:1426–1436.
Ruder L, Takeoka A, Arber S. Long-Distance Descending Spinal Neurons Ensure Quadrupedal Locomotor Stability. Neuron 2016, 92:1063–1078.
Yury Sokolov, Jonathan Rubin
University of Pittsburgh, Department of Mathematics, Pittsburgh, PA, United States
Correspondence: Yury Sokolov (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):O10
Network (population) bursts are a signature neuronal activity in the pre-Bötzinger complex (pre-BötC), a brainstem region critical for respiratory rhythm generation. During the initiation of a network burst, the pre-BötC shows a consistent pattern of dynamic transitions. Starting with mostly silent neurons, the pre-BötC transitions to an intermediate state with a positive fraction of firing neurons that may include tonically spiking and bursting neurons. When a sufficient number of neurons becomes engaged in firing, the pre-BötC network finally undergoes a transition to a population burst, characterized by a high fraction of simultaneously bursting neurons.
Over the last few decades, several models of population bursts in the pre-BötC have been proposed, including conductance-based models featuring various ionic currents, such as INaP and ICAN. While the main objective of these models was to identify the biophysical driving sources underlying network burst initiation, the role of the synaptic connection pattern in shaping neuronal activity has been relatively overlooked. The main reason for this omission is that the models are too complicated for a full analytical treatment and, due to computational limitations, it is difficult to gain full insight into the influence of connectivity. To overcome these obstacles, we propose a simplified model based on a bootstrap percolation process, defined as follows. For a given graph, every node has three possible states: inactive, weakly-active, and fully-active, corresponding to silence, tonic spiking and bursting, respectively. We initialize all nodes to the weakly-active state with probability p1 and to the fully-active state with probability p2, independently of other nodes. As the process evolves, an inactive node transitions to the weakly-active state if the amount of activity among its neighbors exceeds a threshold k1, and to the fully-active state if it exceeds a threshold k2. Similarly, a weakly-active node becomes fully-active if the activity among its neighbors exceeds k2. Nodes cannot reduce their activity levels, and fully-active nodes do not change their state until the end of a trial. We analyze this process analytically and computationally on various random graph models and address three questions. First, we determine the values of p1 and p2, as functions of k1 and k2, for which the network reaches a population burst by the end of a trial.
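The three-state process just described can be prototyped directly (a minimal sketch on an Erdős–Rényi graph; the graph model, initialization probabilities and thresholds used here are illustrative):

```python
import random

def percolate(n, p_edge, p1, p2, k1, k2, seed=0):
    """Three-state bootstrap percolation on an Erdos-Renyi graph.
    States: 0 = inactive, 1 = weakly-active (tonic), 2 = fully-active (burst)."""
    rng = random.Random(seed)
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                nbrs[i].append(j)
                nbrs[j].append(i)
    state = []
    for _ in range(n):
        u = rng.random()                      # independent initialization
        state.append(2 if u < p2 else 1 if u < p1 + p2 else 0)
    changed = True
    while changed:                            # iterate to a fixed point
        changed = False
        for i in range(n):
            if state[i] == 2:
                continue                      # fully-active is absorbing
            activity = sum(state[j] for j in nbrs[i])
            if activity > k2:                 # enough activity: burst
                state[i], changed = 2, True
            elif activity > k1 and state[i] == 0:
                state[i], changed = 1, True   # weaker threshold: tonic
    return sum(s == 2 for s in state) / n     # fraction bursting at trial end

frac = percolate(n=200, p_edge=0.05, p1=0.1, p2=0.05, k1=2, k2=4)
```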
Our findings suggest possible reasons why the network may fail to generate a population burst after the deletion of a fixed fraction of arbitrary nodes, which is consistent with experimental laser ablation of rhythmogenic pre-BötC (Dbx1) neurons. Second, we investigate how structural features of different graph models affect the duration of the process. Lastly, we describe how nodal measures can identify nodes that, when activated initially, are particularly well suited to ignite a population burst. This result shows that local properties of graphs are good descriptors of the spread of bursting activity and also addresses the extent to which successive population bursts may feature similar or different initiation mechanisms.
Lyle Muller, Terrence Sejnowski
Salk Institute for Biological Studies, Computational Neurobiology Laboratory (CNL), La Jolla, CA, United States
Correspondence: Lyle Muller (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):O11
With new multichannel recording technologies, neuroscientists can now record from single cortical regions with high spatial and temporal resolution. Early recordings during anesthesia found spontaneous and stimulus-evoked waves traveling across single cortical regions. For a long time, however, these waves were thought to disappear in awake animals and during high-input regimes. By introducing new signal processing methods for moment-by-moment detection and characterization of spatiotemporal patterns under noise, our recent work has found that small visual stimuli evoke waves traveling out from the point of thalamocortical input to primary visual cortex in the awake monkey [1]. Further, using a measure of directed information transfer across recording sites in V1 of anesthetized monkey, another group has found that traveling waves can influence intracortical dynamics during viewing of natural stimuli [2]. These results indicate that traveling waves can play a role in organizing neural activity during natural sensory processing. Their overall computational role in sensory cortex, however, remains poorly understood. Here, we introduce a spiking model that captures a general network-level mechanism for traveling waves in cortex. We study networks in the self-sustained activity regime [3], where conductance-based networks of neurons can create an internally generated noise [4] consistent with the irregular-asynchronous (IA) background activity state in cortex [5]. We find that a microscopic property—the axonal conduction velocity—profoundly controls the spatiotemporal structure of the spontaneous background state. While previous work has generally considered the time delays from intraregional recurrent fibers to be negligible, these can range up to tens of milliseconds over a few millimeters of the cortical surface, and their inclusion shapes self-sustained activity patterns into spontaneous traveling waves matching those observed in recordings from cortex.
By studying networks of 10^4 to 10^6 neurons through a range of connectivity regimes, from very sparse (1 synapse/cell) to that found in cortex (10,000 synapses/cell [6]), we identify spatiotemporal patterns ranging from dense waves, where the fraction of individual neurons participating in a passing wave is nearly unity, to sparse waves, where this fraction becomes very low. The sparse wave regime offers a unique operating mode, where many waves can coexist while weakly interacting during their propagation across the network. Finally, in collaboration with the laboratory of John Reynolds (Salk Institute), we show how spontaneous, sparse traveling waves can affect visual processing in the awake marmoset, leading to dynamic shifts in perceptual thresholds.
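The magnitude of these recurrent delays follows directly from distance and conduction velocity (an illustrative calculation; the 0.3 mm/ms velocity is a typical published estimate for unmyelinated horizontal fibers, not a value taken from this model):

```python
import numpy as np

def delay_matrix(positions_mm, velocity_mm_per_ms=0.3):
    """Pairwise axonal conduction delays (ms) between neurons at the
    given 2D cortical positions (mm)."""
    diff = positions_mm[:, None, :] - positions_mm[None, :, :]
    dist_mm = np.sqrt((diff ** 2).sum(-1))
    return dist_mm / velocity_mm_per_ms

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 4.0, size=(500, 2))   # neurons on a 4 x 4 mm sheet
D = delay_matrix(pos)                        # delays reach tens of ms
```

Over a 4 x 4 mm patch, the longest connections incur delays approaching 19 ms at this velocity, which is the "tens of milliseconds" scale quoted above.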
Muller L, Reynaud A, Chavane F, Destexhe A. The stimulus-evoked population response in visual cortex of awake monkey is a propagating wave. Nature Communications 2014, 5.
Besserve M, Lowe SC, Logothetis NK, et al. Shifts of Gamma Phase across Primary Visual Cortical Sites Reflect Dynamic Stimulus-Modulated Information Transfer. PLoS Biology 2015, 13.
Kumar A, Schrader S, Aertsen A, Rotter S. The High-Conductance State of Cortical Networks. Neural Computation 2008, 20(1):1–43.
Destexhe A, Contreras D. Neuronal computations with stochastic network states. Science 2006, 314:85–90.
Brunel N. Dynamics of Sparsely Connected Networks of Excitatory and Inhibitory Spiking Neurons. Journal of Computational Neuroscience 2000, 8:183–208.
Braitenberg V, Schüz A. Cortex: Statistics and Geometry of Neuronal Connectivity. Springer, 1998.
Daniel Levenstein1, György Buzsáki1, John Rinzel2
1New York University, Neuroscience Institute, New York, NY, United States; 2New York University, Center for Neural Science & Courant Institute of Mathematical Sciences, New York, NY, United States
Correspondence: Daniel Levenstein (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):O12
During non-rapid eye movement (NREM) sleep, the neocortex continuously alternates between states of neuronal spiking (UP states) and inactivity (DOWN states). Similarly, the hippocampus shows continuous alternations between brief periods of neuronal activity (sharp wave-ripples, SPW-Rs) and relative inactivity. While the durations of the active/inactive states are dramatically different in the two regions, the hippocampus and neocortex are both cortical tissue and are under similar neuromodulatory influence during NREM. This prompts the question of whether neocortical UP/DOWN states and hippocampal SPW-Rs might be explained by similar mechanisms. Furthermore, the mechanisms by which alternation dynamics in the two regions interact to support NREM function are unclear. To address these questions, we used an idealized firing rate model of UP/DOWN alternations with four distinct dynamical regimes, which are distinguished by the stability or transience of the UP/DOWN states and encompass those seen in previous studies. By directly matching model dynamics with experimental observations in naturally-sleeping rats, we found that the alternation dynamics observed in neocortex and hippocampus during NREM reflect two distinct regimes of excitable activity with characteristically asymmetric UP/DOWN state durations. Specifically, we find that the neocortical dynamics reflect a stable UP state interrupted by transient DOWN states (slow waves), while the hippocampal dynamics reflect a stable DOWN state with transient UP states (sharp waves). We further considered the effects of including an inhibitory population in the model. We find that under conditions of balanced excitation and inhibition, neocortical UP→DOWN transitions can be evoked by excitatory input and are followed by a high frequency oscillation at the DOWN→UP transition, as is observed in vivo.
We propose that during NREM sleep, hippocampal and neocortical populations are in excitable states, from which small fluctuations can evoke the transient events that support NREM function. The excitable dynamics we describe suggest a mechanism by which the two structures could show a form of communication through "stochastic synchronization" of spontaneous population events during NREM sleep.
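The two excitable regimes can be illustrated with a minimal adapting rate model (a sketch in the spirit of the idealized model, not its fitted parameters, which are all illustrative here): depending on the mean input I, the system has a single stable UP or DOWN branch, and noise evokes transient excursions to the opposite state that slow adaptation then terminates.

```python
import numpy as np

def simulate(I, w=8.0, b=4.0, tau_r=5.0, tau_a=200.0,
             n_steps=20000, dt=1.0, sigma=1.0, seed=1):
    """Adapting rate model: r relaxes toward f(w*r - b*a + I + noise);
    the slow adaptation a tracks r and terminates transient events."""
    rng = np.random.default_rng(seed)
    f = lambda x: 1.0 / (1.0 + np.exp(-x))        # sigmoid rate function
    r, a = 0.5, 0.5
    trace = np.empty(n_steps)
    for k in range(n_steps):
        drive = w * r - b * a + I + sigma * rng.standard_normal()
        r += dt / tau_r * (-r + f(drive))
        a += dt / tau_a * (-a + r)
        trace[k] = r
    return trace

up_stable = simulate(I=-1.0)    # neocortex-like: stable UP, transient DOWNs
down_stable = simulate(I=-5.0)  # hippocampus-like: stable DOWN, transient UPs
```

With these toy parameters, I = -1 gives a mostly-UP trace and I = -5 a mostly-DOWN trace, reproducing the asymmetry of durations in caricature.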
O13 Biological mechanisms for learning: A computational model of olfactory learning in the Manduca sexta moth
Charles Delahunt1, Jeffrey Riffell2, J. Nathan Kutz1
1University of Washington, Department of Applied Mathematics, Seattle, WA, United States; 2University of Washington, Department of Biology, Seattle, WA, United States
Correspondence: Charles Delahunt (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):O13
The moth olfactory network, which includes the antennal lobe (AL), mushroom body (MB), and ancillary structures, is a relatively simple biological neural system that is capable of learning. Its structural features include motifs that are widespread in biological neural systems, such as a cascade of networks, large dimension shifts from stage to stage, sparsity, noise, and randomness. Learning is enabled by a neuromodulatory reward mechanism of octopamine stimulation of the AL, whose increased activity induces rewiring of the MB through Hebbian plasticity. The goal of this work is to analyze how these various components interact to enable learning. To this end, we build a computational model of the moth olfactory network, including the dynamics of octopamine stimulation, which is closely aligned with the known biophysics of the AL-MB and with in vivo AL firing rate data of moths during learning. To our knowledge this is the first full, end-to-end neural network model that demonstrates learning behavior while also closely matching the structure and behavior of a particular biological system. The model is able to robustly learn new odors, and provides a valuable tool for examining the role of octopamine in learning. This octopamine mechanism during learning is of particular interest, since how it promotes the construction of new codes in the MB is not understood. Specifically, our experiments elucidate key biological mechanisms for fast learning from noisy data that rely on an interaction between cascaded networks, sparsity, Hebbian plasticity, and neuromodulatory stimulation by octopamine.
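The reward-gated rewiring can be reduced to a toy update (a caricature of the mechanism only, not the model's actual learning rule; dimensions and rates are arbitrary): the Hebbian term is multiplied by an octopamine signal, so coincident pre/post activity rewires the MB only during reinforcement.

```python
import numpy as np

def hebbian_update(W, pre, post, octopamine, lr=0.05, decay=0.01):
    """One octopamine-gated Hebbian step: pre/post coincidence drives
    weight growth only when the reward signal is present."""
    dW = lr * octopamine * np.outer(post, pre) - decay * W
    return np.clip(W + dW, 0.0, None)             # weights stay non-negative

pre = np.array([1.0, 0.0, 1.0, 0.0])              # AL projection activity
post = np.array([0.0, 1.0, 1.0])                  # MB (Kenyon cell) activity
W = np.zeros((3, 4))
W_reward = hebbian_update(W, pre, post, octopamine=1.0)
W_no_reward = hebbian_update(W, pre, post, octopamine=0.0)
```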
Natalia Maksymchuk, Atit Patel, Nathaniel Himmel, Daniel Cox, Gennady Cymbalyuk
Georgia State University, Neuroscience Institute, Atlanta, GA, United States
Correspondence: Natalia Maksymchuk (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):O14
Intracellular Ca2+ concentration usually correlates with the neuronal activity pattern and the behavioral response. However, noxious cold sensation in Drosophila presents a paradox for these associations. Pkd2 and Trpm channels are required to trigger the nociceptive full-body contraction (CT) under acute cold [1]. Trpm mutants exhibit an increase in [Ca2+] levels above control yet display reduced CT behavior, whereas Pkd2 mutants show reductions in both [Ca2+] level and behavior [1]. We developed a Hodgkin-Huxley-type model of the cold-sensitive CIII neurons to investigate the interaction of Pkd2, Trpm and SK currents and to explain this experimental paradox. Our main mechanism assumes that the mutation of Trpm is homeostatically accompanied by a compensatory increase of the total Pkd2 current conductance, which leads to an amplified rise of [Ca2+] under noxious cold temperatures. This higher [Ca2+] activates a stronger SK current, which hyperpolarizes the membrane potential and suppresses spiking, thereby inhibiting the stereotyped CT behavior under noxious cold stimuli. This model prediction is supported by experiments showing a 2-fold increase of Pkd2 mRNA levels in Trpm mutants relative to control, while no change in Trpm mRNA levels was observed in Pkd2 mutants.
Basic models of the CIII neuron describing the responses of control, Trpm and Pkd2 mutants show transitions from silence at room temperature to spiking activity below 18 °C, but have distinct features. Models of control and Trpm mutants reach a maximum spike frequency near 14.5 °C, while the Pkd2 mutant model exhibits its maximum frequency at 6 °C, and this maximum is smaller than in the control and Trpm mutant models. The decrease of maximum frequency in Pkd2 mutants, as well as the absence of spiking activity over most of the temperature range in Trpm mutants, may explain the inhibition of CT behavior under noxious cold. The [Ca2+] responses of the three models are in agreement with the corresponding experimental data [1]: the [Ca2+] signal of CIII neurons under noxious cold is strongest in Trpm mutants and weakest in Pkd2 mutants. Thus, the model and experimental results suggest that cold-evoked CT behavior is tuned to an optimal Ca2+ level, which does not always functionally represent the level of neuronal excitation. The basic model also exhibits a wide spectrum of qualitatively different activity regimes. Depending on the parameter set, these regimes are associated with different levels of [Ca2+] and can be arranged into an alternative temperature-coding scheme following the sequence of transitions, as temperature decreases, from small-amplitude spiking through a period-doubling cascade to bursting, large-amplitude spiking, and rest. These two coding schemes provide robust and generic mechanisms for encoding modality-specific activity patterns through the coordinated, modality-specific activation of two TRP currents.
This research was supported by NIH grant NS086082 and a GSU Brains and Behavior Seed Grant (DNC), N.H. is a Brains and Behavior and Honeycutt Fellow; A.A.P. is a 2CI Neurogenomics and Honeycutt Fellow.
Turner HN, Armengol K, Patel AA, et al. The TRP Channels Pkd2, NompC, and Trpm Act in Cold-Sensing Neurons to Mediate Unique Aversive Behaviors to Noxious Cold in Drosophila. Current Biology 2016, 26(23), 3116–3128.
Louis Kang1, Vijay Balasubramanian2
1University of California, Berkeley, Redwood Center for Theoretical Neuroscience, Berkeley, CA, United States; 2University of Pennsylvania, Computational Neuroscience Initiative, Philadelphia, PA, United States
Correspondence: Louis Kang (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):O15
O16 Simulating in vivo context-dependent recruitment of CA1 hippocampal interneuron specific 3 (IS3) interneurons
Alexandre Guet-McCreight, Frances Skinner
Krembil Research Institute, Division of Fundamental Neurobiology, Toronto, Canada
Correspondence: Alexandre Guet-McCreight (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):O16
Obtaining recordings from individual cells during behaviour is technically challenging, especially for the diverse interneuron subtypes, which tend to be smaller, less accessible, and less identifiable relative to excitatory cells. As such, it is difficult to determine inhibitory cell contributions, but it is clear that consideration of interneuron subtypes is critical to understanding brain function and behavior [3]. To address this, we use computational approaches. We focus on the hippocampal CA1 interneuron specific 3 (IS3) cell, a cell type that has not yet been recorded from in vivo. Notably, though IS3 cells represent a small fraction of interneurons in CA1 hippocampus, they possess unique circuitry properties in that they only inhibit other inhibitory neurons, such as Oriens Lacunosum Moleculare (OLM) interneurons. In vitro, photo-activation of IS3 cells at theta frequencies has been shown to elicit theta-timed spiking in OLM cells [4]. To explore the potential contributions of IS3 cells during in vivo contexts, we use multi-compartment IS3 cell models to generate predictions of input populations that could either enhance or dampen IS3 cell activities during behavior. We have developed data-driven multi-compartment models of IS3 cells with active dendritic properties [1], determined realistic synaptic parameters along the dendritic morphology of the models [2], and estimated numbers of active synapses and presynaptic spike rates to generate in vivo-like states for IS3 cell models. Here, we consider context-dependent recruitment of IS3 cells during simulated states of theta rhythms and sharp wave-associated ripples (SWRs). During these states, we use our models to predict the contributions of different presynaptic inhibitory and excitatory input populations.
Our results show that excitatory theta-timed inputs from CA3 and entorhinal cortex can modulate the timing of IS3 cell spiking during theta rhythms. Moreover, depending on their relative contributions, the timing of the IS3 cell model’s spiking can occur anywhere between the rising phase and peak of the theta cycle. As well, we show that inhibitory inputs can dampen spike recruitment of IS3 cells regardless of phase, though less so for inhibitory inputs that are the most antiphase relative to excitatory inputs. For our simulated SWR context, we show that transiently bursting CA3 inputs alone are sufficient to recruit the IS3 cell model to spike. We also show that the presence of feedforward inhibition on the proximal dendrites of the model can sufficiently dampen IS3 cell spiking during a SWR context. In summary, we have simulated in vivo-like contexts where IS3 cell spike recruitment can be either enhanced or dampened. Our results highlight possible IS3 cell spiking scenarios and thus their potential contributions to brain function and behavior.
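Theta-timed presynaptic input of the kind used in these simulations can be generated as an inhomogeneous Poisson process (a generic sketch by thinning; the rate, modulation depth and phase are illustrative, not the study's fitted values):

```python
import numpy as np

def theta_spike_train(rate_hz, theta_hz, phase_rad, depth, T_s, rng):
    """Inhomogeneous Poisson spike times (s) with theta-modulated rate,
    generated by thinning a homogeneous process at the peak rate."""
    t, spikes = 0.0, []
    r_max = rate_hz * (1.0 + depth)                # peak instantaneous rate
    while t < T_s:
        t += rng.exponential(1.0 / r_max)          # candidate spike time
        r = rate_hz * (1.0 + depth * np.cos(2*np.pi*theta_hz*t - phase_rad))
        if rng.random() < r / r_max:               # thinning acceptance
            spikes.append(t)
    return np.array(spikes)

rng = np.random.default_rng(0)
train = theta_spike_train(rate_hz=20.0, theta_hz=8.0, phase_rad=0.0,
                          depth=0.8, T_s=10.0, rng=rng)
```

Different input populations would use different preferred phases (phase_rad), which is how the antiphase relationships explored above can be set up.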
Guet-McCreight A, et al. Using A Semi-Automated Strategy To Develop Multi-Compartment Models That Predict Biophysical Properties Of Interneuron Specific 3 (IS3) Cells In Hippocampus. eNeuro 2016, 3(4). pii: ENEURO.0087-16.2016.
Guet-McCreight A, et al. F1000Research 2017, 6:1552 (poster).
Kepecs A, Fishell G. Interneuron cell types are fit to function. Nature 2014, 505(7483):318–26.
Tyan L, et al. Dendritic inhibition provided by interneuron-specific cells controls the firing rate and timing of the hippocampal feedback inhibitory circuitry. J Neurosci. 2014, 34(13):4534–47.
O17 Quantitative simplification of detailed microcircuit demonstrates the limitations to common point-neuron assumptions
Christian A Rössert1, Giuseppe Chindemi1, Andrew Davison2, Dimitri Rodarie1, Nicolas Perez Nieves3, Christian Pozzorini1, Csaba Eroe1, James King1, Taylor Newton1, Max Nolte1, Srikanth Ramaswamy1, Michael Reimann1, Willem Wybo1, Marc-Oliver Gewaltig1, Wulfram Gerstner1, Henry Markram1, Idan Segev4, Eilif Muller1
1École Polytechnique Fédérale de Lausanne, Blue Brain Project, Lausanne, Switzerland; 2CNRS, Unité de Neuroscience, Information et Complexité, Gif sur Yvette, France; 3Imperial College London, Department of Physics, London, United Kingdom; 4Hebrew University of Jerusalem, Department of Neurobiology, Jerusalem, Israel
Correspondence: Eilif Muller (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):O17
A first-draft detailed simulation of a piece of the rat neocortex has recently been reported by an international collaboration [1]. This work integrated the current state of experimental knowledge on the detailed 3D anatomy and physiology of the various neuron types and their synaptic properties and connectivity, and was shown to reproduce findings from a range of in vivo experiments reported in the literature without parameter tuning. For large-scale network simulations, on the other hand, point-neuron models are typically used for describing and analyzing network dynamics and functions. The properties and connectivity structure of point-neuron models are generally not constrained by biological data and thus rely on ad hoc simplifying assumptions. This makes some of the mathematically tractable models somewhat disconnected from experimental neuroscience. To bridge the gap between these two extremes (the detailed and the oversimplified), we aimed to derive point-neuron network models from data-driven detailed network models in an automated, repeatable and quantitatively verifiable manner. The simplification proceeds in a modular workflow, performed in an in vivo-like state. First, synapses are displaced from the dendrites to the soma while correcting for dendritic filtering, using low-pass filters for the synaptic current that are numerically calibrated for each dendritic compartment. Next, point-neuron models for each neuron in the microcircuit are fitted to their respective morphologically detailed counterparts. Here, generalized integrate-and-fire point-neuron models are used, leveraging a recently published fitting toolbox [2]. The fits are constrained by currents and voltages computed in the morphologically detailed reference neurons with soma-displaced synapses, as described above. Benchmarking the simplified network model against the detailed microcircuit model for a range of simulated in vivo and in vitro protocols, we found good agreement in both quantitative and qualitative aspects.
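The dendrite-to-soma correction step can be illustrated with a first-order filter (a sketch only; the actual workflow calibrates a filter numerically per dendritic compartment, and the time constant and attenuation below are arbitrary):

```python
import numpy as np

def soma_equivalent_current(i_syn, dt, tau_filter, attenuation):
    """Low-pass filter and attenuate a dendritic synaptic current to
    approximate its effect when injected at the soma."""
    out = np.empty_like(i_syn)
    y = 0.0
    alpha = dt / (tau_filter + dt)        # first-order low-pass coefficient
    for k, i in enumerate(i_syn):
        y += alpha * (i - y)              # exponential smoothing
        out[k] = attenuation * y
    return out

dt = 0.1                                  # ms
t = np.arange(0, 50, dt)
i_syn = np.exp(-t / 2.0)                  # distal synaptic current, tau = 2 ms
i_soma = soma_equivalent_current(i_syn, dt, tau_filter=5.0, attenuation=0.4)
```

The filtered current peaks later and lower than the distal current, which is the qualitative signature of dendritic filtering that the calibrated filters reproduce quantitatively.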
Our automated approach not only makes it possible to continuously update the simplified circuit as the detailed network integrates new data, but the modularity of the simplification process also makes it applicable to other point neuron and synapse models, network models, and simulators. In addition to providing an extensive assessment of validity for carefully reduced point neuron network models, our approach is fundamentally important and informative, in particular in cases when network functionalities are lost during the simplification pipeline. By taking the simplification further to evaluate common simplifying assumptions, we further illustrate the contributions of specific synaptic and cellular dynamics to the overall response of the detailed network, revealing limitations for several common approaches.
Markram H, Muller E, Ramaswamy S, et al. Reconstruction and Simulation of Neocortical Microcircuitry. Cell 2015, 163(2), 456–492.
Pozzorini C, Mensi S, Hagens O, et al. Automated High-Throughput Characterization of Single Neurons by Means of Simplified Spiking Models. PLOS Computational Biology 2015, 11(6).
Christian Ebner1, Claudia Clopath2, Peter Jedlicka3, Hermann Cuntz4
1Ernst Strüngmann Institute, Frankfurt, Germany; 2Imperial College London, Department of Bioengineering, London, United Kingdom; 3Justus Liebig University, Faculty of Medicine, Giessen, Germany; 4Frankfurt Institute for Advanced Studies (FIAS) & Ernst Strüngmann Institute (ESI), Computational Neuroanatomy, Frankfurt/Main, Germany
Correspondence: Peter Jedlicka (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):O18
Numerous experiments have been conducted to monitor the complex interactions that drive activity-dependent long-term plasticity of synapses. Spike timing, firing rate and synaptic location have been found to be important factors that dynamically contribute to the outcomes of plasticity induction protocols. While several theoretical models that implement plasticity rules already exist, they have not yet been used in depth to study plasticity in neuron models with detailed morphology. Here, we extend previous phenomenological voltage-based plasticity rules by developing a new framework based on three signaling pathways. We apply it to a L5 pyramidal cell model with active dendritic properties and realistic voltage propagation. We show that our novel rule not only reconciles the outcomes of several experiments but also predicts spatiotemporal patterns of plasticity that are characteristic of individual stimulation protocols and their impact on local processes at the synapse, including protocols inducing local plasticity in tuft dendrites. Due to this focus on local voltage signals, our framework can explain synaptic plasticity in the absence of postsynaptic action potentials, as suggested in recent studies. We thereby link experimental results that would intuitively seem to require entirely different rules, showing that a unifying rule might explain the vast majority of experiments in cortical pyramidal cells if key biophysical pathways are taken into account. Ultimately, we can now study how cell-type specific electrotonic properties explain differences in emerging plasticity by incorporating our plasticity rule into a variety of existing detailed compartmental models, such as models of hippocampal pyramidal or granule cells.
To summarize, a simple plasticity rule that utilizes pre- and postsynaptic plasticity pathways can explain experimental results with a large variety of induction protocols when the plasticity rule is incorporated in the compartmentalized structure of a detailed dendritic model.
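A minimal voltage-based rule of the kind being extended here can be sketched as follows (in the spirit of Clopath-type rules, not the three-pathway framework itself; thresholds, amplitudes and time constants are illustrative). Depression is gated by a presynaptic spike arriving while a low-pass-filtered voltage is depolarized; potentiation requires a presynaptic trace together with momentary and filtered voltages above threshold, so it can proceed without postsynaptic spikes.

```python
import numpy as np

def run_rule(v, pre_spikes, dt=1.0, theta_minus=-65.0, theta_plus=-45.0,
             a_ltd=1e-4, a_ltp=1e-4, tau_minus=10.0, tau_plus=7.0, tau_x=15.0):
    """Evolve one synaptic weight given postsynaptic voltage v (mV) and a
    binary presynaptic spike train, both sampled at dt (ms)."""
    w, u_m, u_p, x = 0.5, v[0], v[0], 0.0
    for k in range(len(v)):
        u_m += dt / tau_minus * (v[k] - u_m)        # slow voltage trace (LTD)
        u_p += dt / tau_plus * (v[k] - u_p)         # fast voltage trace (LTP)
        x += dt / tau_x * (pre_spikes[k] / dt - x)  # presynaptic trace
        # LTD: presynaptic spike while the filtered voltage is depolarized
        w -= a_ltd * pre_spikes[k] * max(u_m - theta_minus, 0.0)
        # LTP: presynaptic trace while voltage exceeds both thresholds
        w += (a_ltp * x * max(v[k] - theta_plus, 0.0)
              * max(u_p - theta_minus, 0.0) * dt)
    return w

pre = np.zeros(1000)
pre[::100] = 1.0                              # 10 Hz presynaptic train
w_dep = run_rule(np.full(1000, -40.0), pre)   # depolarized pairing
w_hyp = run_rule(np.full(1000, -70.0), pre)   # hyperpolarized pairing
```

With these toy traces, pairing at a depolarized voltage potentiates the weight, while pairing at a hyperpolarized voltage leaves it unchanged.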
O19 Assisted construction of hybrid circuits: making easy the implementation and automation of interactions between living and model neurons
Manuel Reyes-Sanchez, Irene Elices Ocon, Rodrigo Amaducci, Francisco B Rodriguez, Pablo Varona
Universidad Autónoma Madrid, Ingeniería Informática, Madrid, Spain
Correspondence: Irene Elices Ocon (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):O19
Closed-loop interactions with the nervous system are a powerful approach to characterize neural dynamics and control network functions [1, 2]. In particular, neuron models can interact with living neurons in hybrid circuits once proper adaptation is achieved in both directions [3, 4]. Such adaptations are not easy to accomplish in a manual trial-and-error process, and are better determined with closed-loop protocols based on real-time event detection [5] and well-defined interaction goals and performance measurements. This work presents a set of algorithms for the assisted construction of hybrid circuits. These algorithms have been implemented in RTHybrid, an open-source cross-platform real-time model library [6]. Our real-time algorithms for the assisted construction of hybrid circuits are based on a general closed-loop paradigm designed to be modular and effective. As a function of their online measured input parameters, the algorithms perform the following tasks: (1) temporal and amplitude scaling, (2) drift compensation, (3) synaptic tuning/calibration, (4) model tuning/calibration, (5) automatic activity control, and (6) automatic mapping of the dynamics. The temporal and amplitude scales are evaluated and matched online to create compatible working regimes between the model and the living neurons [4]. All protocols use three steps: event detection, activity and connection characterization, and target performance evaluation. The events detected online include spikes, bursts, hyperpolarization intervals, voltage ranges, temporal structures, phases, etc. The interaction characterization measures include event timings, instantaneous periods, synchronization levels, target phases, and working/dynamic range assessments. When the interaction goal is not fulfilled, the target evaluator algorithm changes the parameters of the hybrid circuit in an informed and automatic manner. Our algorithms have been validated in a hybrid circuit to study the presence of dynamical invariants in CPGs [7].
In conclusion, hybrid circuits require experiment-specific adaptations to work properly, and the parameters of the implementation must be evaluated dynamically on each preparation and even adapted during the same experiment. These algorithms can also be used to automatically map the parameter space to achieve a given goal, and in general to control/explore/unveil bifurcations and circuit dynamics.
We acknowledge support from MINECO/FEDER DPI2015-65833-P, TIN2014-54580-R, TIN2017-84452-R (http://www.mineco.gob.es/) and ONRG grant N62909-14-1-N279.
Chamorro P, Muñiz C, Levi R, Arroyo D, Rodríguez FB, Varona P. PLoS ONE 2012, 7(7).
Elices I, Varona P. 2015, 170, 55–62.
Ambroise M, Buccelli S, Grassia F, Pirog A, Bornat Y, Chiappalone M, et al. Artif. Life Robot 2017, 22, 398–403.
Reyes-Sanchez M, Elices I, Amaducci R, Muñiz C, Rodríguez FB, Varona P. BMC Neuroscience 2017, 18 (Suppl 1):P281 (CNS 2017).
Varona P, Guardeño DA, Nowotny T, Rodríguez FB. 2017. Online event detection requirements in closed-loop neuroscience. In Closed Loop Neuroscience (pp. 81–91).
Amaducci R, Muñiz C, Reyes-Sanchez M, Rodríguez FB, Varona P. BMC Neuroscience 2017, 18 (Suppl 2):P104 (CNS 2017).
Elices I, Arroyo D, Levi R, Rodríguez FB, Varona P. BMC Neuroscience 2017, 18 (Suppl 1):P282 (CNS 2017).
Oltman De Wiljes1, Ronald Van Elburg2, Fred Keijzer1
1University of Groningen, Theoretical Philosophy, Groningen, Netherlands; 2University of Groningen, Faculty of Science and Engineering, Groningen, Netherlands
Correspondence: Oltman De Wiljes (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):O20
O21 Community models as the ultimate objective (and success) of computational neuroscience: exempli gratia: The cerebellar Purkinje cell
Southern Oregon University, Department of Biology, Ashland, OR, United States
Correspondence: James Bower (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):O21
P1 MRI2MRI: A fully convolutional deep artificial network algorithm that accurately transforms between brain MRI contrasts
Ariel Rokem1, Sa Xiao2, Yue Wu2, Aaron Lee2
1University of Washington, eScience Institute, Seattle, WA, United States; 2University of Washington, Department of Ophthalmology, Seattle, WA, United States
Correspondence: Ariel Rokem (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P1
Philipp Weidel1, Jakob Jordan2, Abigail Morrison1
1Jülich Research Centre, Institute for Advanced Simulation (IAS-6), Juelich, Germany; 2University of Bern, Department of Physiology, Bern, Switzerland
Correspondence: Philipp Weidel (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P2
Following the enormous breakthroughs in machine learning over the last decade, functional neural network models are of growing interest to many researchers in computational neuroscience. One major branch of research is concerned with biologically plausible implementations of reinforcement learning, with a variety of models developed in recent years. However, most studies in this area are conducted with custom simulation scripts and manually implemented tasks. This makes it hard for other researchers to reproduce and build upon previous work, and nearly impossible to compare the performance of different learning architectures.
In this work, we present a novel approach to this problem, connecting benchmark tools from the field of machine learning and state-of-the-art neural network simulators from computational neuroscience. This toolchain enables researchers in both fields to make use of well-tested high-performance simulation software supporting biologically plausible neuron, synapse and network models, and allows them to evaluate and compare their approaches on the basis of a curated set of standardized environments of varying complexity. We demonstrate the functionality of the toolchain by implementing a neuronal actor-critic architecture for reinforcement learning in the NEST simulator, successfully training it on two different environments from the OpenAI Gym, and comparing its performance to a previously published model of reinforcement learning in the basal ganglia and a standard Q-learning algorithm.
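The standard Q-learning baseline mentioned above can be sketched in a few lines; the toy five-state chain environment here stands in for an OpenAI Gym task (all names and parameters are illustrative, not taken from the toolchain):

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2           # toy chain; actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9              # learning rate and discount factor

def step(s, a):
    """Deterministic chain world: reward 1 only for going right in the last state."""
    if a == 1:
        return min(s + 1, n_states - 1), (1.0 if s == n_states - 1 else 0.0)
    return max(s - 1, 0), 0.0

# off-policy learning: behave randomly, learn the values of the greedy policy
for _ in range(500):
    s = 0
    for _ in range(20):
        a = int(rng.integers(n_actions))
        s2, r = step(s, a)
        # standard temporal-difference update
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

greedy_policy = Q.argmax(axis=1)     # should choose 'right' in every state
```

A Gym environment exposes the same loop through its `reset`/`step` interface, which is what makes such baselines directly comparable to spiking architectures wrapped in the same API.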
We acknowledge partial support by the German Federal Ministry of Education and Research through the German-Japanese Computational Neuroscience Project (BMBF Grant 01GQ1343), EuroSPIN, the Helmholtz Alliance through the Initiative and Networking Fund of the Helmholtz Association and the Helmholtz Portfolio theme “Supercomputing and Modeling for the Human Brain”, and the European Union Seventh Framework Programme (FP7/2007–2013) under grant agreement no. 604102 (HBP). All network simulations were carried out with NEST (http://www.nest-simulator.org).
Gewaltig, MO, Diesmann M. NEST (NEural Simulation Tool). Scholarpedia 2007, 2(4), 1430
Brockman G, Cheung V, Pettersson L, et al. (2016). OpenAI Gym. ArXiv e-prints
Jitsev J, Morrison A, Tittgemeyer M. Learning from positive and negative rewards in a spiking neural network model of basal ganglia. Proc. Int. Joint Conf. on Neural Networks (IJCNN) 2012.
Tesauro G. Temporal difference learning and TD-Gammon. Communications of the ACM 1995, 38(3), 58–68.
P3 Reproducing polychronization: a guide to maximizing the reproducibility of spiking network models
Robin Pauli1, Philipp Weidel1, Susanne Kunkel2, Abigail Morrison1
1Jülich Research Centre, Institute for Advanced Simulation (IAS-6), Juelich, Germany; 2Norwegian University of Life Sciences, Faculty of Science and Technology, Ås, Norway
Correspondence: Philipp Weidel (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P3
Any modeler who has attempted to reproduce a spiking neural network model from its description in a paper has discovered what a painful endeavor this is. Even when all parameters appear to have been specified, which is rare, the initial attempt to reproduce the network typically does not yield results that are recognizably akin to those in the original publication. Causes include inaccurately reported or hidden parameters (e.g. a wrong unit or the existence of an initialization distribution), differences in the implementation of model dynamics, and ambiguities in the text description of the network experiment. The very fact that adequate reproduction often cannot be achieved until a series of such causes has been tracked down and resolved is in itself disconcerting, as it reveals unreported model dependencies on specific implementation choices that either were not clear to the original authors, or that they chose not to disclose. In either case, such dependencies diminish the credibility of the model’s claims about the behavior of the target system. To demonstrate these issues, we provide a worked example of reproducing a seminal study for which, unusually, source code was provided at the time of publication. Despite this seemingly optimal starting position, reproducing the results was time-consuming and frustrating. From this process, we derive a guideline of best practices that would substantially reduce the investment in reproducing such a study. We propose that these guidelines can be used by authors and reviewers to assess and improve the reproducibility of future network models.
We acknowledge the Initiative and Networking Fund of the Helmholtz Association, the Helmholtz Association through the Helmholtz Portfolio Theme “Supercomputing and Modeling for the Human Brain”, the German Research Foundation (DFG; KFO 219, TP9) and the European Union’s Horizon 2020 research and innovation programme (HBP SGA1, grant no. 720270 and no. 754304). We thank P. Quaglio, G. Trensch and R. Gutzen for fruitful discussions.
Izhikevich EM. Polychronization: Computation with spikes. Neural Computation 2006, 18, 245–282.
Robin Pauli, Tom Tetzlaff, Abigail Morrison
Jülich Research Centre, Institute for Advanced Simulation (IAS-6), Juelich, Germany
Correspondence: Robin Pauli (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P4
Deep brain stimulation (DBS) of the subthalamic nucleus (STN) can suppress pathological oscillations and alleviate motor deficits in Parkinson’s disease. The efficacy and the extent of side effects of DBS depend critically on the positioning of the stimulation electrode. In particular, with the increased use of directional DBS, it is becoming increasingly difficult to find optimal stimulation parameters. A major challenge during the positioning of DBS electrodes is the detection of hotspots associated with the generation of pathological coherent activity. Here, we develop and test a method for localizing confined regions of coherent activity based on the local field potential (LFP) recorded with multi-contact electrodes. Our approach involves two steps, the identification of coherent sources by independent-component analysis of the multi-channel recordings in Fourier space, and the localization of identified sources by means of current-source-density analysis. We benchmark this technique for a range of source sizes and source-electrode distances based on synthetic ground-truth data generated by a simple LFP model. In this context, sources of coherent activity can be reliably localized even if the source center is not contained in the volume covered by the electrode grid. The proposed method permits a continuous tracking of source positions, and may therefore provide a tool to study the spatio-temporal organization of pathological activity in STN. Moreover, it could serve as an intra-operative guide for the positioning of DBS electrodes, and thereby improve and speed up both the implantation process and the adjustment of stimulus parameters.
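The first step, identifying a coherent source mixed into multichannel recordings, can be illustrated with a small blind-source-separation sketch (a generic FastICA on synthetic two-channel data, not the authors' Fourier-domain pipeline or their CSD localization step):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(2000) / 1000.0
s1 = np.sign(np.sin(2 * np.pi * 13 * t))       # coherent beta-band-like source
s2 = rng.uniform(-1.0, 1.0, t.size)            # independent background source
S = np.vstack([s1, s2])
A_mix = np.array([[1.0, 0.6], [0.4, 1.0]])     # stand-in for volume conduction
X = A_mix @ S                                   # two simulated electrode channels

# whiten the channels
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / X.shape[1])
Xw = (E / np.sqrt(d)) @ E.T @ X

# symmetric FastICA with a tanh contrast function
W = rng.standard_normal((2, 2))
for _ in range(200):
    Y = np.tanh(W @ Xw)
    W_new = (Y @ Xw.T) / Xw.shape[1] - np.diag((1.0 - Y**2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)
    W = U @ Vt                                  # symmetric decorrelation
components = W @ Xw                             # rows ~ sources (up to sign/order)
```

Each recovered component should match one underlying source up to sign and scale; in the actual method, the unmixing coefficients of the identified coherent component would then feed the current-source-density localization.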
Funded by the Initiative and Networking Fund of the Helmholtz Association, the German Research Foundation (DFG; KFO 219, TP9) and the European Union’s Horizon 2020 research and innovation programme (HBP SGA1, grant no. 720270).
Jyotika Bahuguna, Philipp Weidel, Abigail Morrison
Jülich Research Centre, Institute for Advanced Simulation (IAS-6), Juelich, Germany
Correspondence: Jyotika Bahuguna (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P5
The classical firing rate model of the basal ganglia suggests that the “Go” pathway facilitates a movement, whereas the “No-go” pathway suppresses a movement. Strong evidence for this hypothesis was provided by the demonstration that selective optogenetic stimulation of D1-MSNs in mice leads to increased ambulation, whereas optogenetic stimulation of D2-MSNs leads to freezing. However, it has also been shown that D1- and D2-MSNs co-activate in freely moving mice during action initiation, which suggests a co-operative rather than an antagonistic role for these pathways. In order to systematically investigate the individual and interactive roles of D1- and D2-MSNs in action selection, it is necessary to be able to both record D1- and D2-MSNs in the same animal, and to selectively record and manipulate the action-encoding neurons. Because this is beyond present experimental techniques, we investigate this issue with the help of a hybrid spiking neuronal network/virtual robot model. The advantage of this approach is that D1- and D2-MSNs can be observed and manipulated at the single-channel and population levels whilst the effect of these manipulations can be observed in the trajectories of the robot, thereby bridging the gap between striatal recordings and behavioral expression. We first demonstrate that our model can reproduce the main features of several key motor studies employing optogenetic manipulation, such as freezing, increased ambulation and ipsilateral turning. We then test the hypothesis that D1- and D2-MSNs are competitive within a channel but cooperative on a population level. Our results show that, in opposition to our original hypothesis, D1- and D2-MSNs co-operate within a channel and compete between channels. In this co-operative tandem, D1-MSNs drive the action execution while D2-MSNs suppress the competing actions.
Although the co-operation between D1- and D2-MSNs within a channel is facilitated by distance-dependent connectivity, an external stimulation of both populations is required in order to exhibit concurrent activation at the population level, as observed in experiments. We also show that D2-D2 connectivity is crucial for the competition between the channels. Furthermore, we show that individual pairs of D1- and D2-MSNs compete or co-operate depending on the distance between their originating channels and the stimulation paradigms.
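The concluding picture can be caricatured with a toy rate sketch (illustrative weights, not the spiking/robot model): within a channel, D1 drives its own action, while each channel's D2 suppresses the other channels' actions:

```python
import numpy as np

d1 = np.array([1.0, 0.2])     # channel 1 strongly stimulated, channel 2 weakly
d2 = np.array([1.0, 0.2])     # D1 and D2 co-activate within each channel
w_go, w_stop = 1.0, 0.8       # hypothetical gain of each pathway

def action_drive(d1, d2):
    """Drive of each action: own-channel D1 minus D2 from all other channels."""
    cross_inhibition = d2.sum() - d2      # D2 input from the *other* channels
    return w_go * d1 - w_stop * cross_inhibition

drive = action_drive(d1, d2)  # channel 1 selected, channel 2 suppressed
```

Even in this caricature, co-activating D1 and D2 of the cued channel both executes its action and vetoes the competitor, matching the within-channel co-operation / between-channel competition result.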
This work was inspired by the debate “Direct and indirect pathways: Friends or Foes?” at IBAGS 2017. Funded by the German Research Foundation (DFG; grant DI 1721/3-1 [KFO219-TP9]), the Helmholtz Portfolio Theme “Supercomputing and Modeling for the Human Brain” (SMHB) of the Initiative and Networking Fund of the Helmholtz Association, and the German-Japanese Computational Neuroscience Project of the German Federal Ministry of Education and Research (BMBF Grant 01GQ1343).
Kravitz AV, Freeze BS, Parker PR, et al. Regulation of parkinsonian motor behaviours by optogenetic control of basal ganglia circuitry. Nature 2010, 466(7306), 622–626.
Cui G, Jun SB, Jin X, et al. Concurrent activation of striatal direct and indirect pathways during action initiation. Nature 2013, 494(7436), 238–242.
Tecuapetla F, Matias S, Dugue GP, et al. Balanced activity in basal ganglia projection pathways is critical for contraversive movements. Nature Communications 2014, 5:4315.
Nathan Lee1, Kameron Decker Harris2, Aleksandr Aravkin1
1University of Washington, Department of Applied Mathematics, Seattle, WA, United States; 2University of Washington, Department of Computer Science, Seattle, WA, United States
Correspondence: Nathan Lee (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P6
Roman Levin, Merav Stern, Eric Shea-Brown, Aleksandr Aravkin
University of Washington, Department of Applied Mathematics, Seattle, WA, United States
Correspondence: Roman Levin (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P7
Principal component analysis (PCA) is a fundamental data decomposition technique used to reduce the dimensionality of data and understand its underlying structure. In the presence of additional structure or features (e.g. sparse outliers), it is beneficial to use a structured decomposition method to analyze the data. A key example is robust PCA (RPCA), which separates the data into low-dimensional and sparse components; a well-known use case is background/foreground separation, isolating moving objects from their surroundings.
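The low-rank-plus-sparse split can be sketched with a basic solver (a minimal inexact augmented Lagrangian scheme for principal component pursuit; the parameters below are common defaults, not a tuned production solver):

```python
import numpy as np

def rpca(M, lam=None, n_iter=80):
    """Decompose M into a low-rank part L and a sparse outlier part S
    (a simple principal-component-pursuit sketch)."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))
    norm2 = np.linalg.norm(M, 2)
    mu, mu_max, rho = 1.25 / norm2, 1e7 / norm2, 1.5
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        # singular-value thresholding step for the low-rank component
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # entrywise soft-thresholding step for the sparse component
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y = Y + mu * (M - L - S)        # dual update enforcing M = L + S
        mu = min(mu * rho, mu_max)
    return L, S

# synthetic check: rank-1 "background" plus a few large outliers
rng = np.random.default_rng(0)
low_rank = np.outer(rng.standard_normal(50), rng.standard_normal(40))
sparse = np.zeros_like(low_rank)
rows, cols = rng.integers(0, 50, 20), rng.integers(0, 40, 20)
sparse[rows, cols] = 10.0
L_hat, S_hat = rpca(low_rank + sparse)
```

In the background/foreground example, each video frame is a column of `M`; `L_hat` captures the static background and `S_hat` the moving objects.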
Hermann Cuntz1, Alexander Bird2
1Frankfurt Institute for Advanced Studies (FIAS) & Ernst Strüngmann Institute (ESI), Computational Neuroanatomy, Frankfurt/Main, Germany; 2Frankfurt Institute for Advanced Studies (FIAS), Computational Neuroanatomy, Frankfurt am Main, Germany
Correspondence: Hermann Cuntz (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P8
While the electrotonic properties of biophysically realistic neuronal models are most often probed with current injections at the soma or with single synaptic inputs, these are far from natural stimuli for real neurons. In this project we use analytical methods based on cable theory, in combination with detailed passive and active compartmental modelling, to study the responses of neurons to synaptic inputs occurring randomly in time and dendritic location. We find that under these uniform conditions dendrites behave very similarly to point neurons. The voltage responses throughout the dendritic tree average out to a constant voltage level, similar to a bucket that is filled by multiple faucets. Analytically, the voltage integral over the total dendritic length is the same regardless of the location of synaptic inputs. In passive numerical simulations the individual voltage profiles then average out. The local voltage throughout the dendrite would in principle allow decoding of the percentage of synapses active at any given time, which could be very important for synaptic plasticity rules that correlate synaptic activity with the overall activity in the cell. In simple active somatic spiking models, voltages are transformed into numbers of spikes in a manner that further allows decoding of the percentage of active synapses from the current firing rate of a neuron. Overall, while well-distributed random synaptic events are probably also not a natural input to the neuron, our calculations serve as a reference point for comparing the behaviour of neurons in more realistic biophysical neural network models.
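The location-invariance of the dendritic voltage integral can be checked in a toy discretized passive cable (arbitrary units and a sealed-end chain of compartments, not the authors' detailed compartmental models): summing the steady-state balance equations cancels all axial terms, so the leak conductance times the summed voltage equals the injected current wherever it enters.

```python
import numpy as np

n = 100
g_leak, g_axial = 1.0, 50.0            # leak and axial conductances (arbitrary units)

# steady-state system  (leak + axial coupling) v = injected current
A = np.diag(np.full(n, g_leak))
for k in range(n - 1):                 # nearest-neighbour axial coupling
    A[k, k] += g_axial; A[k + 1, k + 1] += g_axial
    A[k, k + 1] -= g_axial; A[k + 1, k] -= g_axial

def summed_voltage(site, i_inj=1.0):
    """Steady-state voltage integral for a point injection at one compartment."""
    i = np.zeros(n); i[site] = i_inj
    return np.linalg.solve(A, i).sum()

totals = [summed_voltage(s) for s in (0, 25, 50, 99)]   # all equal i_inj / g_leak
```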
Siwei Wang1, Idan Segev1, Stephanie Palmer2, Oren Amsalem1, Alexander Borst3
1Hebrew University of Jerusalem, Department of Neurobiology, Jerusalem, Israel; 2University of Chicago, Department of Organismal Biology and Anatomy & Department of Physics, Chicago, IL, United States; 3Max Planck Institute, Department of Neurobiology, Munich, Germany
Correspondence: Siwei Wang (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P9
P10 Distinct roles of anterior cingulate cortex and basolateral amygdala in reinforcement learning under perceptual uncertainty
Alexandra Stolyarova1, Megan Peters2, Hakwan Lau1, Alicia Izquierdo1
1University of California, Los Angeles, Department of Psychology, Los Angeles, CA, United States; 2University of California, Riverside, Bioengineering, Riverside, CA, United States
Correspondence: Alexandra Stolyarova (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P10
The incremental trial-by-trial refinement of behavior can be captured by reinforcement learning (RL) models, which map stimuli to actions using reward prediction errors (RPEs). Most tasks assessing the neural underpinnings of RL have used clearly discriminable and unambiguous cues, leaving open the question of how the brain copes with perceptual uncertainty when learning by trial and error. The subjective sense of certainty, or confidence, that accompanies perceptual decisions can substitute for RPEs in the absence of external feedback and affect neural activity in canonical RL circuits. In the current study, we trained rats to discriminate between horizontally (H)- and vertically (V)-oriented visual stimuli (sinusoidal gratings), either embedded in noise or compounded with orthogonal gratings. Animals indicated their decision based on a stimulus–response rule: H → left and V → right. Following discrimination, rats expressed their confidence by time wagering: they could wait a self-timed delay in anticipation of reward or initiate a new trial. In general, rats’ expressed confidence increased with accuracy and was higher for correct than error choices. Yet confidence computations overly relied on perceptual information congruent with the decision (i.e., rats waited longer when the contrast of the grating favoring the choice increased, even in the absence of performance increases), while decisions themselves weighed congruent and incongruent evidence equally, consistent with previous studies in primates [5, 6]. This allowed us to identify two stimulus conditions for each animal that produced matched decision accuracy and reinforcement history, but different subjective confidence levels. Rats were then randomly assigned to a low- (LC) or high-confidence (HC) group and performed a reversal learning task, which required remapping of the stimulus–response contingency for the LC or HC stimuli, respectively.
The key finding is that subjective certainty potentiated learning: reversal learning was faster in the HC group. Motivated by recent work implicating the rat anterior cingulate cortex (ACC) and basolateral amygdala (BLA) in learning under uncertainty, we chemogenetically silenced projection neurons in these regions. Inhibition of the ACC decreased metacognitive sensitivity (i.e., the trial-by-trial correspondence between accuracy and confidence), rendering confidence reports invariant to the strength of the evidence and thereby attenuating the benefit of certainty on learning. In contrast, BLA silencing slowed reversal learning, but left confidence reports intact. Finally, we extended the standard RL model to allow confidence to directly influence value updating. Fitting this model to rat behavior revealed that only BLA inhibition decreased the learning rate. Conversely, ACC inhibition attenuated the impact of confidence on value computations and decreased the inverse temperature parameter in the decision rule that maps action values to choice probabilities, indicating a decreased reliance on the learned information. Thus, the ACC may aid in estimating the reliability of perceptual and value information to guide action selection, whereas the BLA appears to play a more general role in potentiating learning when environmental conditions significantly change.
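One plausible form of such a confidence-weighted value update (a sketch with made-up parameter values, not the fitted model) scales the RPE by the trial's confidence and maps values to choices with a softmax:

```python
import numpy as np

def softmax(q, beta):
    """Map action values to choice probabilities; beta is the inverse temperature."""
    e = np.exp(beta * (q - q.max()))
    return e / e.sum()

def confidence_weighted_update(values, action, reward, confidence, alpha=0.2):
    """Scale the reward prediction error by subjective confidence on this trial."""
    values = values.copy()
    rpe = reward - values[action]
    values[action] += alpha * confidence * rpe
    return values

v0 = np.zeros(2)
v_hi = confidence_weighted_update(v0, 0, 1.0, confidence=1.0)   # high-confidence trial
v_lo = confidence_weighted_update(v0, 0, 1.0, confidence=0.3)   # low-confidence trial
p_choice = softmax(v_hi, beta=3.0)
```

In this scheme, flattening confidence (as with ACC inhibition) shrinks the effective learning step, while lowering beta makes choices less reliant on the learned values.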
Sutton RS, Barto AG. Reinforcement learning: An introduction (Vol. 1). MIT Press, Cambridge, MA, 1998.
Guggenmos M, Wilbertz G, Hebart MN, Sterzer P. Mesolimbic confidence signals guide perceptual learning in the absence of external feedback. Elife 2016, 5, https://doi.org/10.7554/elife.13388.
Hebart MN, Schriever Y, Donner TH, Haynes JD. The Relationship between Perceptual Decision Variables and Confidence in the Human Brain. Cereb Cortex 2016, 26, 118–130, https://doi.org/10.1093/cercor/bhu181.
Lak A, Stauffer WR, Schultz W. Dopamine prediction error responses integrate subjective value from different reward dimensions. Proc Natl Acad Sci U S A 2014, 111, 2343–2348, https://doi.org/10.1073/pnas.1321596111
Zylberberg A, Barttfeld P, Sigman M. The construction of confidence in a perceptual decision. Front Integr Neurosci 2012, 6, 79, https://doi.org/10.3389/fnint.2012.00079.
Maniscalco B, Peters MA, Lau H. Heuristic use of perceptual evidence leads to dissociation between performance and metacognitive sensitivity. Atten Percept Psychophys 2016, 78, 923–937, https://doi.org/10.3758/s13414-016-1059-x
Winstanley CA, Floresco S. B. Deciphering Decision Making: Variation in Animal Models of Effort- and Uncertainty-Based Choice Reveals Distinct Neural Circuitries Underlying Core Cognitive Processes. J Neurosci 2016, 36, 12069–12079, https://doi.org/10.1523/jneurosci.1713-16.2016.
Maniscalco B, Lau H. A signal detection theoretic approach for estimating metacognitive sensitivity from confidence ratings. Conscious Cogn, 2012, 21(1), 422–430. https://doi.org/10.1016/j.concog.2011.09.021
Lukasz Kusmierz, Taro Toyoizumi, Alireza Goudarzi
RIKEN Brain Science Institute, Neural Computation and Adaptation, Wako, Japan
Correspondence: Lukasz Kusmierz (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P11
A growing body of evidence shows that many organisms commonly exhibit Lévy flights (LFs) during their search behavior. For example, trajectories of T cells, fruit flies, wandering albatrosses, human saccades, and free word associations involve power-law distributions of displacement steps, summarizing frequent nearby explorations and infrequent jumps to distant locations. Although there are multiple putative explanations as to why LFs might emerge from case-specific search constraints, a general theory explaining this behavior is lacking. We show that Newton’s optimization method with noisy measurements generically leads to heavy tails of the step-size distribution. The resulting stochastic process is an LF with tail index α = 1. Additionally, the magnitude of large jumps in our model strongly depends on the local curvature of the optimized function, with rarer jumps close to targets. This suggests that noisy Newton’s optimization may be an efficient way of combining global random exploration with locally optimal exploitation. We thus examine the circumstances under which heavy-tailed steps can be advantageous for the search. Since the search patterns of many organisms resemble those of LFs, our results suggest that they may be employing second-order derivatives. We further discuss implications of our results for models of learning. Plasticity rules are often derived assuming the steepest descent method. We argue that even approximate and very noisy second-order optimization should be more efficient.
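The emergence of the α = 1 tail can be illustrated numerically: near an optimum, a Newton step is the ratio of noisy gradient and curvature estimates, and a ratio of roughly Gaussian estimates has a Cauchy-like power-law tail (the noise scales below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
grad_est = rng.standard_normal(n)            # noisy gradient (true gradient ~ 0)
curv_est = 0.5 + rng.standard_normal(n)      # noisy curvature estimate
steps = -grad_est / curv_est                 # Newton steps

# estimate the tail index from P(|step| > x) ~ x**(-alpha)
x1, x2 = 10.0, 100.0
p1 = np.mean(np.abs(steps) > x1)
p2 = np.mean(np.abs(steps) > x2)
alpha_hat = np.log(p1 / p2) / np.log(x2 / x1)   # should be close to 1
```

The same ratio structure also yields the curvature dependence: a larger true curvature keeps the denominator away from zero, making large jumps rarer near steep targets.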
Harris TH, Banigan EJ, Christian DA, et al. Generalized Lévy walks and the role of chemokines in migration of effector CD8+ T cells. Nature 2012, 486(7404), 545–548.
Reynolds AM, Frye MA. Free-flight odor tracking in Drosophila is consistent with an optimal intermittent scale-free search. PLoS ONE 2007, 2(4), e354.
Brockmann D, Geisel T. The ecology of gaze shifts. Neurocomputing 2000, 32–33.
Viswanathan GM, Afanasyev V, Buldyrev SV, Murphy EJ, Prince PA, Stanley HE. Lévy flight search patterns of wandering albatrosses. Nature 1996, 381, 413–415.
Daniel Zavitz1, Isaac Youngstrom2, Matt Wachowiak2, Alla Borisyuk1
1University of Utah, Department of Mathematics, Salt Lake City, UT, United States; 2University of Utah, Department of Neurobiology & Anatomy, Salt Lake City, UT, United States
Correspondence: Daniel Zavitz (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P12
In this study, we investigate a multiscale model of inter-glomerular connectivity of the mouse olfactory bulb. Each node in the network represents a glomerulus comprising many neurons. We specify probabilistic wiring rules for the outgoing connections of individual cells, based on tracing data, and study the emergent properties of the resulting network of nodes. An important parameter in the wiring rules, unknown from experiments, is connection selectivity. It is determined by the size of each node’s “target set”: the set of nodes where all of its outgoing connections must land. We investigate graph-theoretic properties of these networks, such as weighted degree distributions, clustering coefficients, and centrality. We find that these properties differ significantly from those of well-studied network models (random, small-world, scale-free, etc.). Finally, we add minimal but biologically realistic nonlinear firing-rate dynamics to the networks to study the effect of network structure on the processing of sensory data. Using both experimentally derived and artificial stimuli, we find that in these networks, regardless of connection selectivity, lateral inhibition mediates the sparsening of the neural code and the decorrelation of representations of similar stimuli.
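The target-set wiring rule can be sketched as follows (node counts and set sizes are hypothetical, chosen only to show how selectivity shapes the in-degree distribution):

```python
import numpy as np

rng = np.random.default_rng(2)
n_glom, n_out = 100, 50       # glomeruli (nodes), outgoing connections per node

def wire(target_set_size):
    """Draw a target set per node; every outgoing connection of that node
    lands uniformly inside the set."""
    W = np.zeros((n_glom, n_glom))
    for src in range(n_glom):
        targets = rng.choice(n_glom, size=target_set_size, replace=False)
        for dst in rng.choice(targets, size=n_out):   # individual-cell connections
            W[src, dst] += 1
    return W

in_deg_selective = wire(5).sum(axis=0)     # small target sets: high selectivity
in_deg_broad = wire(99).sum(axis=0)        # nearly unconstrained wiring
```

With small target sets, each node's connections concentrate on a few partners, so the weighted in-degree distribution is much broader than in the near-random case even though the mean is identical.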
Casey Diekman, Amitabha Bose
New Jersey Institute of Technology, Department of Mathematical Sciences, Newark, NJ, United States
Correspondence: Amitabha Bose (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P13
William Barnett1, Yaroslav Molkov1, Lucas Koolen2, Adrian Newman-Tancredi3, Mark Varney3, Ana Abdala2
1Georgia State University, Department of Mathematics & Statistics, Atlanta, GA, United States; 2University of Bristol, School of Physiology, Pharmacology & Neuroscience, Biomedical Sciences Faculty, Bristol, United Kingdom; 3Neurolixis Inc, Dana Point, CA, United States
Correspondence: William Barnett (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P14
In healthy humans, lung ventilation is tightly controlled to maintain physiological levels of CO2. During restful breathing, exhalation is largely passive; the lungs deflate as the diaphragm relaxes. In exercise, hypoxia or hypercapnia, active exhalation is engaged to increase lung ventilation; the abdominal and thoracic muscles contract during the final half of exhalation. This activity is quantifiable in vivo via the abdominal nerve (AbN). Active exhalation is thought to originate from late-expiratory (late-E) neurons located within the parafacial respiratory group (pFRG). However, the mechanisms by which this expiratory oscillator is recruited and interacts with the respiratory central pattern generator (CPG) are not fully understood. It has been proposed that active exhalation emerges during hypercapnia when late-E neurons receive excitatory drive from putative central chemoreceptors in the retrotrapezoid nucleus (RTN), overcoming inhibition from the respiratory CPG. The Kölliker-Fuse (KF) is thought to modulate the strength of inhibitory inputs from the respiratory CPG to late-E neurons. Both RTN and KF receive inputs from 5-HT neurons located in the medullary raphe, some of which are chemosensitive. Systemic administration of a 5-HT1A receptor (5-HT1AR) antagonist promoted irregular breathing and apneas in rodents. This effect was recapitulated by focal application of the antagonist into the KF. Conversely, systemic administration of 5-HT1AR agonists ameliorated breathing irregularity and apneas in C57BL/6 and Mecp2-deficient mice, and focal administration into the KF corrected apneas in the latter. Since deficits of inhibitory input to the KF were shown to contribute to apneas in Mecp2-deficient mice, we propose that 5-HT1AR agonists inhibit KF and CPG neuron sub-populations that provide inhibitory drive to late-E neurons, disinhibiting the latter.
Here, we combined an experimental approach with computational simulations of the respiratory CPG to test the hypothesis that 5-HT1AR activation promotes active exhalation in the absence of lung inflation feedback. For this, we determined the effects of a biased, highly selective and efficacious 5-HT1AR agonist, NLX-101 (aka F15599), on resting respiratory motor outputs of decerebrate rats under cardio-pulmonary bypass. NLX-101 increased respiratory rate in a concentration-dependent fashion, differing significantly from baseline at ≥ 0.1 μM. Notably, at [NLX-101] ≥ 0.1 μM, late-E bursts emerged in the AbN under eucapnia. Simulations of 5-HT1AR agonist-induced active exhalation that best fitted the data required the following testable assumptions: (1) 5-HT1AR activation inhibited KF subpopulations that drive post-inspiratory neurons in the Bötzinger complex (BC); (2) 5-HT1AR directly inhibited post-inspiratory neurons in the BC; leading to (3) disinhibition of late-E neurons and the emergence of active exhalation. In summary, 5-HT1AR agonism evokes active exhalation and increases respiratory frequency in a manner resembling the hypercapnic response. The data indicate that 5-HT1AR may contribute to the emergence of active exhalation in response to hypercapnia. Our modeling results suggest that this respiratory response may be mediated by suppression of the KF and of post-inspiratory neurons in the BC, disinhibiting late-E neurons. Future experimental verification of these predictions would provide a mechanistic basis for the indication of 5-HT1AR agonists to treat respiratory depression.
P15 Analyzing how the Na+/K+ pump influences the robust bursting activity of half-center oscillator (HCO) models
Ronald Calabrese, Anca Doloc-Mihu
Emory University, Department of Biology, Atlanta, GA, United States
Correspondence: Anca Doloc-Mihu (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P15
Robust bursting activity in central pattern generating networks (CPGs) is achieved by the coordinated regulation of many membrane and synaptic current parameters. CPG neurons depend upon a Na+/K+ pump to maintain the ionic gradients that establish the resting potential and thus support other ionic currents. The Na+/K+ pump produces a net outward current proportional to its activity. However, how the Na+/K+ pump and its current are directly involved in the mechanisms that allow multiple parameters to interact, thus producing and maintaining rhythmic single-cell and network activity, is not yet fully understood.
We use a half-center oscillator (HCO) mathematical model that includes a Na+/K+ pump to replicate the rhythmic alternating bursting of mutually inhibitory interneurons of the leech heartbeat CPG under a variety of experimental conditions. This HCO model consists of a pair of reciprocally inhibitory model neurons, each represented as a single isopotential electrical compartment with Hodgkin-Huxley-type intrinsic membrane and synaptic conductances. The model has eight currents with voltage-dependent conductances (including two types of inhibitory synaptic currents, spike-mediated and graded) and a Na+/K+ pump current, which tracks changes in intracellular Na+ concentration that occur as a result of the Na+ fluxes carried by ionic currents. The Na+/K+ pump exchanges two K+ ions for three Na+ ions. Its current has a sigmoidal dependence on intracellular Na+ concentration. Na+ currents include the fast spiking current (INa) and a persistent Na+ current (IP). Both the hyperpolarization-activated cation (Ih) and leak currents have Na+ and K+ components. We build a large parametric space of this HCO model and its corresponding isolated neuron models by varying a set of 9 key parameters (the maximal conductances of the persistent Na+ (IP), slow Ca2+, leak, hyperpolarization-activated (Ih), and persistent K+ currents, each across 50, 75, 100, 125, and 150 percent of their canonical values; the leak reversal potential across −66.25, −62.5, −58.75, −55, and −51.25 mV; the half-activation of the Na+/K+ pump across −2, −1, 0, 1, and 2 mV; the maximum Na+/K+ pump current across 0.38, 0.41, 0.44, 0.47, and 0.5 nA; and the slope coefficient across 90, 95, 100, 105, and 110 percent of its canonical value) in all possible combinations (a brute-force approach). We then systematically explored this parameter space and analyzed its 1.65 million simulated instances, each having canonical synaptic interactions.
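The brute-force construction described above can be sketched as a Cartesian product over the nine per-parameter value lists (the parameter names below are our own labels, not identifiers from the authors' code; conductance and slope entries are given as fractions of the canonical value). Note that the full grid contains 5^9 = 1,953,125 points, slightly more than the 1.65 million instances reported, so presumably not every grid point was simulated or retained:

```python
import math
from itertools import product

# Parameter grid from the abstract; names are our own labels.
grid = {
    "gP_scale":         [0.50, 0.75, 1.00, 1.25, 1.50],  # persistent Na+ (IP)
    "gCaS_scale":       [0.50, 0.75, 1.00, 1.25, 1.50],  # slow Ca2+
    "gLeak_scale":      [0.50, 0.75, 1.00, 1.25, 1.50],  # leak
    "gh_scale":         [0.50, 0.75, 1.00, 1.25, 1.50],  # hyperpolarization-activated (Ih)
    "gK2_scale":        [0.50, 0.75, 1.00, 1.25, 1.50],  # persistent K+
    "E_leak_mV":        [-66.25, -62.5, -58.75, -55.0, -51.25],
    "pump_Vhalf_mV":    [-2, -1, 0, 1, 2],               # pump half-activation
    "pump_Imax_nA":     [0.38, 0.41, 0.44, 0.47, 0.50],  # max pump current
    "pump_slope_scale": [0.90, 0.95, 1.00, 1.05, 1.10],
}

def all_instances(grid):
    """Yield one parameter dictionary per point of the full Cartesian grid."""
    names = list(grid)
    for values in product(*(grid[n] for n in names)):
        yield dict(zip(names, values))

n_grid_points = math.prod(len(v) for v in grid.values())  # 5**9 = 1,953,125
```

Each yielded dictionary would parameterize one simulation run of the HCO model.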
For each simulated HCO model we computed its bursting characteristics, which we recorded into a row of a SQL database table called PumpHCO-db (similar to our previous work). This study reports the results of our ongoing investigation into how the realistic activity of HCOs is affected by the Na+/K+ pump. We use the PumpHCO-db database and follow the methodology described in our previous work to analyze how the Na+/K+ pump influences the robust realistic bursting activity of HCO models. We are particularly interested in parameter variations corresponding to known neuromodulations, such as the modulation of Ih and of the maximal Na+/K+ pump current by myomodulin. The study presented here is preliminary to a full investigation of the role of the Na+/K+ pump in the robust maintenance of functional bursting activity.
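A minimal sketch of what such a bursting-characteristics table might look like in SQLite (the column names and example values are hypothetical illustrations; the abstract does not give the actual PumpHCO-db schema):

```python
import sqlite3

# Hypothetical, simplified version of a PumpHCO-db-style table: one row per
# simulated HCO instance, keyed by (a subset of) its parameter values, with
# bursting characteristics computed from the simulated voltage traces.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE PumpHCO_db (
        instance_id      INTEGER PRIMARY KEY,
        e_leak_mv        REAL,
        pump_imax_na     REAL,
        burst_period_s   REAL,
        burst_duration_s REAL,
        spike_freq_hz    REAL,
        activity_type    TEXT      -- e.g. 'bursting', 'tonic', 'silent'
    )
""")
rows = [
    (1, -58.75, 0.44, 8.2, 4.1, 12.5, "bursting"),
    (2, -51.25, 0.50, 0.0, 0.0, 25.0, "tonic"),
]
conn.executemany("INSERT INTO PumpHCO_db VALUES (?,?,?,?,?,?,?)", rows)
conn.commit()

# Query the instances that show realistic alternating bursting.
bursting = conn.execute(
    "SELECT instance_id FROM PumpHCO_db WHERE activity_type = 'bursting'"
).fetchall()
```

Queries over such a table are how one can slice the 1.65-million-instance space by activity type or by parameter ranges of interest (e.g. the myomodulin-related Ih and pump-current axes).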
P16 Experimental directory structure (Exdir): An alternative to HDF5 without introducing a new file format
Svenn-Arne Dragly1, Milad Hobbi Mobarhan2, Mikkel Lepperød2, Simen Tennøe3, Gaute Einevoll4, Marianne Fyhn2, Torkel Hafting5, Anders Malthe-Sørensen1
1University of Oslo, Department of Physics, Oslo, Norway; 2University of Oslo, Department of Biosciences, Oslo, Norway; 3University of Oslo, Department of Informatics, Oslo, Norway; 4Norwegian University of Life Sciences, Faculty of Science and Technology, Aas, Norway; 5University of Oslo, Institute of Basic Medical Sciences, Oslo, Norway
Correspondence: Gaute Einevoll (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P16
There is increasing focus in the scientific community on data sharing and reproducible research. Open formats with publicly available specifications facilitate both. Hierarchical Data Format 5 (HDF5) is a popular open format widely used in neuroscience, often as a foundation for other, more specialized formats. However, certain drawbacks related to HDF5's complex specification have initiated a discussion about an improved replacement. Here, we propose a novel alternative, the Experimental Directory Structure (Exdir), an open specification for data storage in experimental pipelines that aims to address the drawbacks of HDF5 while retaining its advantages. Exdir is not a file format in itself, but a specification for organizing files in a directory structure. Within the Exdir structure, data is stored using established open-source data formats. While HDF5 stores data and metadata in an internal hierarchy within a single binary file, Exdir uses file-system folders to represent the hierarchy, with metadata stored in human-readable YAML files and data stored in the NumPy binary format. The idea of such a solution is already present in the scientific community, but no formal standard has been introduced, making it unnecessarily hard to share data and develop common tools. Exdir facilitates improved data storage, data sharing, reproducible research, and novel insight from interdisciplinary collaboration. We invite the scientific community to join the development of Exdir to create an open specification that will serve as a foundation for open access to and exchange of data.
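The on-disk layout can be illustrated with a stdlib-only sketch that writes the hierarchy by hand (the file names follow our reading of the specification: an `exdir.yaml` type marker per object, `attributes.yaml` for user metadata, and a `data.npy` payload per dataset; the reference implementation at github.com/CINPLA/exdir instead exposes an h5py-like API and writes real NumPy arrays):

```python
import pathlib
import tempfile

# Simplified, hand-rolled sketch of the Exdir on-disk layout.
root = pathlib.Path(tempfile.mkdtemp()) / "experiment.exdir"

def make_object(path, exdir_type):
    """Create a folder and mark its Exdir object type in exdir.yaml."""
    path.mkdir(parents=True, exist_ok=True)
    (path / "exdir.yaml").write_text(
        'exdir:\n  type: "%s"\n  version: 1\n' % exdir_type
    )

make_object(root, "file")                     # the root object
make_object(root / "session_01", "group")     # a group is just a folder
lfp = root / "session_01" / "lfp"
make_object(lfp, "dataset")
(lfp / "attributes.yaml").write_text("sampling_rate: 10000.0\nunit: uV\n")
(lfp / "data.npy").touch()  # placeholder; real Exdir stores a NumPy array here
```

Because everything is ordinary files and folders, the hierarchy can be browsed, versioned, and partially copied with standard tools, which is the key contrast with HDF5's single opaque binary file.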
P17 A mathematical framework for modeling large scale extracellular electrodiffusion surrounding morphologically detailed neurons
Gaute Einevoll1, Geir Halnes1, Andreas Solbrå2, Aslak Wigdahl Bergersen3, Jonas van den Brink3, Anders Malthe-Sørensen2
1Norwegian University of Life Sciences, Faculty of Science and Technology, Aas, Norway; 2University of Oslo, Department of Physics, Oslo, Norway; 3Simula Research Laboratory, Fornebu, Norway
Correspondence: Geir Halnes (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P17
Many pathological conditions, such as seizures, stroke, and spreading depression, are linked to abnormal extracellular ion concentrations in the brain. Ions move due to both diffusion and electrical migration, and to investigate the role of ion-concentration dynamics under pathological conditions, one must simultaneously keep track of both the ion concentrations and the electric potential in the relevant regions of the brain. This remains challenging experimentally, which makes computational modeling an attractive tool. Previous electrodiffusive models of extracellular ion-concentration dynamics have required extensive computing power, and have therefore been limited to either phenomena on very small spatiotemporal scales (micrometers and milliseconds, see e.g. [1]), or to simplified and idealized 1-dimensional (1-D) transport processes on a larger scale. We have previously introduced the Kirchhoff-Nernst-Planck framework, an efficient framework for modeling electrodiffusion in 1-D [2, 3]. In this study, we introduce a 3-dimensional version of this framework. We use it to model the electrodiffusion of ions surrounding a morphologically detailed pyramidal neuron, with a focus on highlighting the intricate interplay between extracellular ion dynamics and the extracellular potential.
The simulation covered a 1 mm³ cylinder of tissue for over a minute of simulated time, and was completed in less than a day on a standard desktop computer, demonstrating the framework's efficiency. We envision that this framework will be useful for elucidating the mechanisms behind pathologies such as the propagation of spreading depression. A preprint of this work is available at bioRxiv [4].
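For orientation, the core equations of an electrodiffusive scheme of this kind can be written as follows (our notation, reconstructed from the cited KNP papers rather than quoted from this abstract): each ion species k obeys a Nernst-Planck continuity equation, and the potential is obtained not from Poisson's equation but from the constraint that charge does not accumulate in the bulk (Kirchhoff's current law):

```latex
\frac{\partial c_k}{\partial t} = -\nabla\cdot\mathbf{j}_k + f_k,
\qquad
\mathbf{j}_k = -D_k\,\nabla c_k \;-\; \frac{D_k z_k}{\psi}\, c_k\, \nabla\phi,
\qquad \psi = \frac{RT}{F},
```

```latex
\nabla\cdot\big(\sigma\,\nabla\phi\big)
  = -\,\nabla\cdot\Big(F\sum_k z_k D_k\, \nabla c_k\Big) \;-\; C,
\qquad
\sigma = \frac{F}{\psi}\sum_k z_k^{2} D_k\, c_k ,
```

where $c_k$, $D_k$, and $z_k$ are the concentration, diffusion constant, and valence of species $k$, $f_k$ and $C$ are neuronal source terms (ion fluxes and transmembrane current density delivered by the morphologically detailed neuron model), and $\phi$ is the extracellular potential. Solving for $\phi$ from current conservation rather than from Poisson's equation is what avoids the sub-nanometer Debye-layer resolution that makes full Poisson-Nernst-Planck schemes [1] so expensive.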
Pods J, Schönke J, Bastian P. Electrodiffusion models of neurons and extracellular space using the Poisson-Nernst-Planck equations–numerical simulation of the intra- and extracellular potential for an axon model. Biophysical Journal 2013 105(1), 242–54.
Halnes G, Østby I, Pettersen KH, et al. Electrodiffusive Model for Astrocytic and Neuronal Ion Concentration Dynamics. PLoS Comput. Biol 2013, 9(12):e1003386.
Halnes G, Mäki-Marttunen T, Keller D, et al. Effect of Ionic Diffusion on Extracellular Potentials in Neural Tissue. PLoS Comput. Biol 2016, 12(11):e1005193.
Solbrå A, Wigdahl Bergersen A, van den Brink J, et al. A Kirchhoff-Nernst-Planck framework for modeling large scale extracellular electrodiffusion surrounding morphologically detailed neurons. bioRxiv 2018, 261107.
University of Wisconsin, Department of Neurology, Madison, WI, United States
Correspondence: Andrew Knox (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P19
Objective: NaV1.1 sodium channel mutations are a well-known cause of epilepsy syndromes, some severe (such as Dravet syndrome) and some more benign (genetic epilepsy with febrile seizures plus). The conventional wisdom is that the many anticonvulsant medications that act on sodium channels should be avoided, although this is supported in the medical literature only by a few case reports and retrospective reviews. In this study, we use a computational model to predict the effects of carbamazepine in patients with Dravet syndrome secondary to truncation mutations.
Methods: A thalamocortical model described by Destexhe was modified to incorporate sodium channels with slow and fast inactivation. The truncation mutation was then simulated by reducing interneuron sodium channel conductance by 50%. The effects of carbamazepine and oxcarbazepine were simulated by increasing the fast inactivation time of sodium channels in cortical neurons, while the effects of eslicarbazepine and lamotrigine were simulated by increasing the slow inactivation time.
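The qualitative logic of the interneuron manipulation can be illustrated with a standard Hodgkin-Huxley point neuron (a generic textbook model, not the Destexhe thalamocortical network used in the study, and without the slow/fast inactivation scheme described above): halving the fast Na+ conductance reduces the cell's spike output for the same input drive.

```python
import math

def spike_count(gna_scale, i_ext=10.0, t_ms=100.0, dt=0.01):
    """Classic Hodgkin-Huxley point neuron (squid-axon parameters), Euler
    integration; returns the number of spikes, counted as upward crossings
    of -20 mV. gna_scale scales the fast Na+ conductance, loosely mimicking
    a heterozygous loss-of-function Na+ channel mutation."""
    gna, gk, gl = 120.0 * gna_scale, 36.0, 0.3    # mS/cm^2
    ena, ek, el = 50.0, -77.0, -54.387            # mV
    v, m, h, n = -65.0, 0.053, 0.596, 0.317       # approximate resting state
    spikes, above = 0, False
    for _ in range(int(t_ms / dt)):
        am = 0.1 * (v + 40) / (1 - math.exp(-(v + 40) / 10))
        bm = 4.0 * math.exp(-(v + 65) / 18)
        ah = 0.07 * math.exp(-(v + 65) / 20)
        bh = 1.0 / (1 + math.exp(-(v + 35) / 10))
        an = 0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10))
        bn = 0.125 * math.exp(-(v + 65) / 80)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        i_ion = gna * m**3 * h * (v - ena) + gk * n**4 * (v - ek) + gl * (v - el)
        v += dt * (i_ext - i_ion)                 # Cm = 1 uF/cm^2
        if v > -20.0 and not above:
            spikes += 1
        above = v > -20.0
    return spikes

full, half = spike_count(1.0), spike_count(0.5)  # 100% vs 50% Na+ conductance
```

In the actual study this loss of interneuron output is what disinhibits the pyramidal population and produces the seizure-like bursting described in the Results.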
Results: Introduction of the truncation mutation into the model reduced the amplitude of sodium currents in interneurons, decreasing the number of action potentials from this population and leading to periods of prolonged bursting in pyramidal neurons akin to tonic seizures. Simulated carbamazepine and oxcarbazepine reduced spiking rates in both populations, decreasing the incidence of seizures. Simulated eslicarbazepine and lamotrigine also decreased action potentials in both populations but did not prevent seizures.
Discussion: This study provides mechanistic evidence that sodium channel anticonvulsants can be beneficial in Dravet syndrome, although effects may be difficult to predict. This model could be validated with patients who have known sodium channel electrophysiology and clinical data documenting efficacy of sodium channel drugs. If validated, the model then could be used to predict the potential benefit of sodium channel anticonvulsants in a given patient with a known sodium channel mutation. This represents a prime application for computer modeling to aid in personalized medicine for patients with epilepsy.
Erik De Schutter, Sarah Nagasawa, Iain Hepburn, Andrew R. Gallimore
Okinawa Institute of Science and Technology, Computational Neuroscience Unit, Onna-Son, Japan
Correspondence: Erik De Schutter (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P20
AMPA receptors (AMPARs) are constitutively trafficked from the neuronal plasma membrane to the endosome, where they are either sorted for degradation at the lysosome or returned to the membrane via recycling vesicles. AMPAR trafficking is controlled by a family of proteins known as the Rab GTPases, which coordinate the sorting of AMPAR-containing vesicles through the endosomal system. The network of Rab proteins can be manipulated in response to synaptic activity, or the induction of plasticity, to increase or decrease trafficking rates, or to redirect the movement of AMPARs towards either degradation or recycling [1]. For example, studies in cerebellar and hippocampal cells have revealed the critical importance of Rab7 activation in the regulation of long-term depression, by augmenting the Rab7-dependent degradation pathway [2]. Although many molecular models of AMPAR trafficking in synaptic plasticity have been developed, these have almost exclusively considered trafficking only at the plasma membrane, neglecting the crucial subcellular trafficking pathways [3]. This is largely because modeling tools for detailed spatial simulation of vesicular and endosomal trafficking have not been available. Although spatial modeling has advanced in recent years, with voxel-based molecular simulators such as STEPS (steps.sourceforge.net) incorporating spatial effects (diffusion and probabilistic interactions between molecules within realistic neuronal mesh structures [4]), there has been no explicit account of molecule size or excluded-volume effects. This approach has the advantage of computational performance and accuracy for small molecules and ions, but these simplifying assumptions break down for complex structures of large size, such as vesicles, and a new modeling approach is required.
We have developed Vesicle objects within STEPS as spherical structures of user-defined size, which occupy a unique excluded volume and sweep a path through the tetrahedral mesh as they diffuse throughout the cytosol. Hybrid modeling allows us to retain normal reaction-diffusion mechanics in the system for other biochemical species, such as kinases, receptors, and calcium. The incorporation of phenomena such as endocytosis, exocytosis, and the fusion and budding of vesicles to and from intracellular membranes allows us to simulate the complete AMPAR vesicular cycle. Our preliminary models using this vesicle modeling technology have been successful in replicating recent experimental studies revealing the essential role of specific Rab proteins in the expression of long-term depression at the parallel fiber-Purkinje cell synapse. It is expected that this new methodology will enable us to model synaptic plasticity and other subcellular processes at levels of detail that have, until now, remained beyond the reach of modeling technologies. We envisage that this will open up entirely new avenues of modeling research in all areas of neuroscience and cell biology in which the regulation of protein trafficking plays a role.
Fernandez-Monreal M, Brown TC, Royo M, Esteban JA. The Balance between Receptor Recycling and Trafficking toward Lysosomes Determines Synaptic Strength during Long-Term Depression. J. of Neurosci. 2012, 32, 13200–13205.
Kim T, Yamamoto Y, Tanaka-Yamamoto K. Timely regulated sorting from early to late endosomes is required to maintain cerebellar long-term depression. Nat. Comm. 2017, 8, 16.
Gallimore AR, Kim T, Tanaka-Yamamoto K, De Schutter E. Switching On Depression and Potentiation in the Cerebellum. Cell Rep 2018, 22, 722–733.
Hepburn I, Chen W, Wils S, De Schutter E. STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies. BMC Syst. Biol. 2012, 6, 36.
Taekjun Kim, Wyeth Bair, Anitha Pasupathy
University of Washington, Department of Biological Structure, Seattle, WA, United States
Correspondence: Taekjun Kim (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P21
Visual texture (the structure of a surface that underlies the perception of roughness or smoothness, fineness or coarseness) is thought to be processed along the ventral visual pathway in the primate. In most past studies, texture was defined as a spatially homogeneous pattern composed of separated elements such as lines or forms. The neural correlates of texture perception were tested by comparing responses to arrays of oriented line segments with and without the presence of differently oriented line segments in the surround. A simple, low-level mechanism (e.g., orientation-tuned suppression) might be sufficient to explain this discrimination. More recently, researchers have probed the neural representation of more naturalistic texture images. These studies demonstrate that, while V1 responses to texture can be explained on the basis of local orientation and spatial frequency information, responses in V2 and V4 require the inclusion of higher-order summary statistics, e.g., correlations between spatially neighboring filters, correlations between filters with neighboring orientations, etc. However, because of the high dimensionality of these statistics, it is still unclear how they relate to the perceptual quality of texture. Specifically, we still do not know whether there are neurons in the brain that encode the perceptual qualities of smoothness, roughness, fineness, etc. In this study, we focus on four basic texture dimensions that have been suggested to be crucial for human visual texture perception: coarseness, directionality, regularity, and contrast. We devised simple statistics to quantify the degree of each attribute in a given texture image, and then examined whether the responses of neurons in macaque area V4 to a variety of natural texture images could be described by selectivity for these perceptually relevant texture features.
Our results indicate that many V4 neurons (about 30% of recorded units) have strong texture selectivity for one or more of the four basic texture features. Textures classified on the basis of neural population activity were in strong agreement with human perception. Interestingly, when we tested the neural representation of shape information (e.g., curvature of the object boundary) in the same neural population, neurons with strong texture selectivity rarely overlapped with those having strong shape selectivity (about 40% of recorded units). These findings suggest that texture and shape encodings are provided by different populations of V4 neurons and that texture-selective V4 neurons extract key psychophysical measures of texture by computing simple summary statistics.
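As an illustration of the kind of simple statistic involved, here is a toy directionality measure (our own construction; the abstract does not give the authors' formulas): the fraction of image-gradient energy concentrated in the single most common orientation bin, which is high for an oriented stripe pattern and low for unstructured noise.

```python
import math
import random

def directionality(img):
    """Toy directionality statistic: fraction of gradient energy in the most
    common of 8 orientation bins (1.0 = perfectly oriented, ~1/8 = isotropic)."""
    h, w = len(img), len(img[0])
    bins = [0.0] * 8
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]    # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag < 1e-12:
                continue
            theta = math.atan2(gy, gx) % math.pi  # orientation, not direction
            bins[min(int(theta / math.pi * 8), 7)] += mag
            total += mag
    return max(bins) / total if total else 0.0

n = 32
stripes = [[(x // 4) % 2 for x in range(n)] for _ in range(n)]  # vertical bars
rng = random.Random(0)
noise = [[rng.random() for _ in range(n)] for _ in range(n)]
```

Analogous scalar summaries for coarseness, regularity, and contrast would give a low-dimensional feature vector per image against which neural selectivity can be tested.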
Tomoyuki Namima, Anitha Pasupathy
University of Washington, Department of Biological Structure, Seattle, WA, United States
Correspondence: Tomoyuki Namima (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P22
Occlusions, which are everywhere in natural scenes, make object recognition a challenging problem. Primate inferior temporal (IT) cortex, the final stage of form processing along the ventral visual pathway, is likely important (Kourtzi and Kanwisher 2001; Lerner et al. 2002; Hegde et al. 2008; Kovacs et al. 1995), but the specific role of IT neurons in representing and recognizing occluded objects is largely unknown. In the present study, we examined how IT neurons encode information about occluding and occluded objects and how these signals might subserve shape discrimination. Monkeys were trained to report whether two stimuli presented in sequence were the same or different. The first stimulus in the sequence was unoccluded, while the second was partially occluded by a set of randomly positioned dots of variable diameter. As the animals performed this sequential shape discrimination task, we recorded single-unit responses in IT cortex. We found that the responses of IT neurons were predominantly modulated by two factors: the shape of the occluded object and the total area of the occluding dots. Consistent with Kovacs et al. (1995), we found that many IT neurons maintained their shape preference under occlusion. But to our surprise, some neurons responded best to the occluded stimuli while others responded best to the unoccluded stimulus. For some neurons, shape selectivity also increased under occlusion. Overall, the color of the stimuli and the shape of the occluders played a minimal role in dictating the responses of IT neurons. Our simulation results suggest that IT responses can be modeled on the basis of two signals: one that reflects the shape of the occluded stimulus and a second that reflects the area of the occluding dots.
Multiplicative modulation, by occluder area, of the signal that reflects the shape of the occluded stimulus, followed by an additive modulation by occluder area, can recapitulate the responses and shape selectivity of IT neurons that respond best to occluded and to unoccluded stimuli. Our results thus imply that, under partial occlusion, the shape selectivity of some IT neurons is enhanced by exploiting the signal about the occluders, and that these IT neurons might be involved in stable object perception under partial occlusion.
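The proposed two-signal account can be sketched in a few lines (the functional forms and coefficients below are illustrative assumptions, not fitted values from the study):

```python
def it_response(shape_signal, occluder_area, w_mult=1.0, w_add=0.0):
    """Toy two-signal model: a shape signal multiplicatively suppressed by
    the occluder-area signal, plus an additive occluder-area term.
    occluder_area is the fraction of the stimulus covered, in [0, 1]."""
    return shape_signal * (1.0 - w_mult * occluder_area) + w_add * occluder_area

# A cell preferring unoccluded stimuli (no additive term) ...
r_un, r_occ = it_response(10.0, 0.0), it_response(10.0, 0.4)
# ... versus a cell whose additive occluder term makes it prefer occlusion.
s_un = it_response(10.0, 0.0, w_mult=0.5, w_add=8.0)
s_occ = it_response(10.0, 0.4, w_mult=0.5, w_add=8.0)
```

Varying the relative weights of the multiplicative and additive terms reproduces both response classes described above from the same two underlying signals.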
Catherine Davey, David Grayden, Anthony Burkitt
University of Melbourne, Department of Biomedical Engineering, Melbourne, Australia
Correspondence: Catherine Davey (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P23
Neural plasticity describes the process by which synaptic weights change in response to inputs and is a primary mechanism by which the brain learns. Learning begins prior to birth, with most mammals being born with some functionality in hearing, movement and vision. Linsker’s [1, 2, 3] seminal three-part paper series provided a compelling model of how learning can occur due to spontaneous activity in the absence of environmental input. He showed that structure in synaptic connection densities can evoke temporal correlation in neural activity that, through Hebbian plasticity, induces the emergence of spatial opponent cells in early layers of cortical processing.
While Linsker considered the spatial aspect of synaptic connectivity distributions, the spike propagation delay was assumed to be uniform among all neurons in a lamina, and hence had negligible impact. We address here the question of how spike propagation delay, due to the time taken for an action potential to travel along the axon from a presynaptic neuron to a postsynaptic neuron, affects the resulting pattern of synaptic connectivity. For myelinated axons, propagation delay is primarily a function of distance and axon diameter. Given the importance of motion perception in everyday life, an understanding of the impact of temporal delays in visual processing, and of their effect upon subsequent neural learning, is an important goal that the current work seeks to address. A three-layer, feed-forward network of Poisson neurons with Gaussian synaptic connection densities is used, as in Linsker's analysis [1]. An expression for the covariance between neurons that incorporates both distance-dependent propagation delay and an arbitrary post-synaptic potential (PSP) function is derived. We show that adding temporal delay destroys the structure of the lag-zero covariance and thus inhibits the development of simple cells, which is incongruent with the way in which neural systems are expected to behave. A more plausible simulation models a presynaptic neuron as impacting a postsynaptic neuron over a finite time, which highlights the importance of the time course of the PSP function. We show the role that the duration of the PSP plays in determining the resulting network structure. We further calculate receptive field size as a function of delay, homeostatic equilibrium, and synaptic connection parameters. The results show the conditions under which the spatial resolution of the developing spatial-opponent cells is optimised; these conditions accord with experimental observations.
This research was supported under the Australian Research Council Discovery Projects funding scheme (project number DP140102947).
Linsker R. From basic network principles to neural architecture: Emergence of spatial-opponent cells. Proceedings of the National Academy of Sciences of the United States of America 1986, 83, 7508–7512.
Linsker R. From Basic Network Principles to Neural Architecture: Emergence of Orientation-Selective Cells. Proceedings of the National Academy of Sciences of the United States of America 1986, 83, 8390–8394.
Linsker R. From Basic Network Principles to Neural Architecture: Emergence of Orientation Columns. Proceedings of the National Academy of Sciences of the United States of America 1986, 83, 8779–8783.
Yanbo Lian1, Hamish Meffin2, David Grayden1, Tatiana Kameneva3, Anthony Burkitt1
1University of Melbourne, Department of Biomedical Engineering, Melbourne, Australia; 2National Vision Research Institute, Carlton, Australia; 3University of Melbourne, Electrical and Electronic Engineering, Parkville, Vic, Australia
Correspondence: Yanbo Lian (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P24
Sparse coding (or efficient coding) successfully generates Gabor-like features from natural input images, which suggests that the visual system employs a small number of neurons to represent visual stimuli. Models based on efficient coding have been proposed to account for some physiological phenomena in the primary visual cortex (V1), such as the diverse shapes of V1 simple-cell receptive fields and non-classical receptive field effects (such as end-stopping). Though some models based on efficient coding were built from the perspective of biological plausibility, they did not respect certain biological constraints, such as Dale's law (the sign of synaptic connections cannot change through learning) and locality of learning. In addition, phase-reversed cortico-thalamic feedback, a phenomenon observed in cat cortex, cannot be explained by current biologically plausible models. In this study, we propose a two-layer model of the visual pathway from the lateral geniculate nucleus (LGN) to V1 based on efficient coding, using rate-based neurons. The first layer has separate channels for on-centered and off-centered LGN cells, and the second layer represents V1 simple cells. There are feedforward and feedback connections between the two layers, and they are initially different. Both feedforward and feedback connections consist of excitatory connections and separate inhibitory connections. The learning rule for updating the connections between LGN and V1 is local because it depends only on the pre-synaptic and post-synaptic firing rates. The sign of excitatory or inhibitory connections is not allowed to change during learning. 12-pixel by 12-pixel image patches sampled from ten 512-pixel by 512-pixel pre-whitened natural images are used as the input stimuli to the model. In our simulations, the learning rule is applied after every 100 input patches are displayed, to accelerate the learning process. The simulations demonstrate several interesting points.
First, our model can explain the emergence of diverse shapes of receptive fields of V1 simple cells: Gabor-like receptive fields and a large percentage of blob-like receptive fields. Second, phase-reversed cortico-thalamic feedback emerges naturally from the structure of the learned connections when natural images are used as input stimuli to train the model. Third, feedforward and feedback connections tend to become identical during learning. Fourth, the overall strength of inhibitory connections between LGN and V1 can significantly alter the connection structure and shape the receptive fields of V1 simple cells. Our implementation of efficient coding incorporates many biological facts, such as Dale's law, non-negative firing rates, a local learning rule, and the existence of cortico-thalamic feedback. The results suggest that efficient coding can be realised by simple neural circuits and can explain important physiological properties of V1.
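The flavor of such a local, Dale's-law-respecting update can be sketched as follows (an illustration of the constraints described above, not the authors' actual learning rule): each weight change depends only on the pre- and post-synaptic rates and the weight itself, and excitatory weights are clipped at zero so their sign can never flip.

```python
import random

def local_update(w_exc, pre_rates, post_rates, lr=0.01):
    """w_exc[i][j]: excitatory weight from LGN cell j onto V1 cell i.
    The update uses only quantities available at the synapse (pre- and
    post-synaptic rates and the current weight); clipping at zero enforces
    Dale's law, so an excitatory weight can shrink but never change sign."""
    for i, post in enumerate(post_rates):
        for j, pre in enumerate(pre_rates):
            w_exc[i][j] += lr * post * (pre - 0.1 * w_exc[i][j])  # Hebb + decay
            w_exc[i][j] = max(w_exc[i][j], 0.0)
    return w_exc

rng = random.Random(1)
W = [[rng.uniform(0.0, 0.1) for _ in range(4)] for _ in range(2)]
W = local_update(W, pre_rates=[1.0, 0.0, 0.5, 0.2], post_rates=[0.8, 0.0])
```

Inhibitory connections would be held in a separate, independently clipped matrix, which is what keeps excitation and inhibition segregated throughout learning.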
P25 Building and simulating a biophysically detailed network model of the mouse primary visual cortex
Yazan Billeh, Sergey Gratiy, Kael Dai, Ramakrishnan Iyer, Nathan Gouwens, Stefan Mihalas, Christof Koch, Anton Arkhipov
Allen Institute for Brain Science, Modelling, Analysis and Theory, Seattle, WA, United States
Correspondence: Yazan Billeh (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P25
Rapid advancement in neuroscientific tools has yielded an extraordinary amount of data regarding the structural and dynamical properties of cortical circuits. In parallel, there has been vast progress in high-performance computing and software, allowing unprecedented simulation capabilities. Here we describe our efforts in combining these two exciting advances to develop, in a data-driven manner, a model of the mouse primary visual cortex (area V1) comprising ~ 230,000 neurons from all cortical layers. For developing our cortical model, we used the Brain Modeling ToolKit (BMTK), a Python API developed by the Allen Institute (github.com/AllenInstitute/bmtk). BMTK allowed us to construct our network and integrate seamlessly with NEURON [Hines and Carnevale 1997] for parallel simulations. Approximately 51,000 cells are biophysically detailed, pooled from > 100 models of individual neurons from the Allen Cell Types Database (celltypes.brain-map.org). The network receives spike-train inputs from filter models representing a variety of functional cell types in the lateral geniculate nucleus (LGN) of the thalamus. The LGN filter models were based on spatiotemporal fits to experimental recordings in vivo [Iyer et al., in preparation]. The projection architecture from the LGN to the visual cortex neurons was based on the experimental literature [Lien & Scanziani 2018]. Purely feedforward simulations showed that the origin of the direction-selective responses observed in certain cortical cell types in our model depends on the thalamocortical topology. Moreover, experimental measurements were used to fit the excitatory post-synaptic current magnitude that V1 neurons receive in response to grating stimuli.
After optimizing the LGN input to the column, the recurrent connectivity between cell types and layers was introduced. The probability of connection, strength of connection (unitary PSP), functional connectivity rules, and synaptic placement for all cell-type pairs were obtained via a thorough literature search, resulting in a knowledge graph that combines the connectivity information with the records of literature sources; assumptions were used where data were not available. As the next critical step, the synaptic weights were optimized to produce irregular network activity in response to visual stimulation. We will describe the construction and simulation of the V1 model and discuss how available or hypothesized information about the properties of cell types, feedforward connectivity from the LGN, and recurrent connectivity has resulted in certain functional properties, such as orientation and direction selectivity. We will also discuss plans to use the developed model to unravel the role of certain cell types and connections in generating patterns of neuronal activity and computations in the cortex. The model represents a milestone in the development of data-driven simulations of brain activity in vivo based on extensive characterization of brain structure in vitro, and should provide a valuable resource for the computational neuroscience community, in conjunction with the standardized model construction and simulation interfaces of the Brain Modeling ToolKit.
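Instantiating connectivity from a cell-type-pair lookup, as in the knowledge-graph-driven step described above, can be sketched generically (the cell-type labels and probabilities below are placeholders, and this ignores distance- and tuning-dependent rules as well as BMTK's actual builder API):

```python
import random

# Placeholder connection probabilities per (source type, target type) pair;
# the real model draws these values from the literature-derived knowledge graph.
CONN_PROB = {
    ("exc_L4", "pvalb_L4"): 0.40,
    ("pvalb_L4", "exc_L4"): 0.50,
}

def build_edges(nodes, rng):
    """nodes: list of (node_id, cell_type) tuples. Returns a directed edge
    list, sampling each candidate connection independently with the
    type-pair probability (0 for pairs absent from the table)."""
    edges = []
    for src_id, src_type in nodes:
        for tgt_id, tgt_type in nodes:
            if src_id == tgt_id:
                continue
            if rng.random() < CONN_PROB.get((src_type, tgt_type), 0.0):
                edges.append((src_id, tgt_id))
    return edges

nodes = [(i, "exc_L4") for i in range(10)] + [(10 + i, "pvalb_L4") for i in range(3)]
edges = build_edges(nodes, random.Random(42))
```

In the full model each sampled edge would additionally carry a synapse count, a unitary-PSP-derived weight, and a dendritic placement rule drawn from the same knowledge graph.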
Hines ML, Carnevale NT. The NEURON simulation environment. Neural Comput. 1997, 9(6), 1179–1209.
Lien AD, Scanziani M. Cortical direction selectivity emerges at convergence of thalamic synapses. Nature 2018, 558, 80–86.
Jessica Helms1, Xandre Clementsmith1, Sorinel Oprisan1, Tamás Tompa2, Antonieta Lavin3
1College of Charleston, Department of Physics and Astronomy, Charleston, SC, United States; 2University of Miskolc, Miskolc, Hungary; 3Medical University of South Carolina, Charleston, SC, United States
Correspondence: Sorinel Oprisan (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P26
Electrophysiology and pharmacology tools have been widely used for exploring and controlling the activity of neural networks. While electrophysiology offers good temporal resolution and pharmacology allows very narrow targeting of specific cells, both have poor spatial resolution. Recently, optogenetics has pushed the limits of spatial resolution and accuracy to the level of the single cell. Light-induced (optogenetic) control of neuronal activity utilizes light-activated photosensitive proteins (microbial opsins), such as channelrhodopsins, to switch ionic channels on and off. We carried out a series of optogenetic experiments on male PV-Cre mice infected with a viral vector hChR2(H134R) delivered to the mPFC. The channelrhodopsin hChR2 is adapted for mammalian expression, with the H134R mutation producing a larger and slower photocurrent than wild-type hChR2. In our experiments, optical stimulation was delivered in vivo by a 473 nm laser and the local field potential (LFP) was sampled at 10 kHz. In two previous studies we examined a control condition and the effect of cocaine; here we investigated the effects of the D1 receptor antagonist SCH23390 and the D2 antagonist sulpiride. Using the delay-embedding method, we identified a low-dimensional attractor and unfolded its phase-space trajectory. The main reason we focus on these two dopamine antagonists is that we want to quantify their ability to bring neural activity changed by cocaine back to its control (no cocaine) range. As in the previous studies, the mPFC response to a brief 10 ms light pulse was recorded for 2 s. During data post-processing, the first 0.5 s were discarded to remove the transient response of the neural network. We performed a nonlinear time series analysis of LFPs recorded from PV+ neurons in the mPFC, using time-reversal asymmetry and false nearest neighbor (FNN) statistics between the original signal and surrogate data to identify nonlinearity in the data set.
The delay-embedding method uses one-dimensional data (a time series of the membrane potential) to unfold the true high-dimensional phase-space dynamics. As in the previous study, we used both (1) the autocorrelation function and (2) the average mutual information to estimate the lag time. The embedding dimension was determined using the false nearest neighbor method.
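The procedure described above can be illustrated with a minimal reimplementation (not the authors' code; the first-zero-crossing lag estimate and the FNN threshold `rtol` are assumed simplifications):

```python
import numpy as np

def embedding_lag(x, max_lag=200):
    """Estimate the embedding lag as the first zero crossing of the autocorrelation."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf = acf / acf[0]
    below = np.nonzero(acf[:max_lag] <= 0.0)[0]
    return int(below[0]) if below.size else max_lag

def delay_embed(x, dim, lag):
    """Stack lagged copies of x into an (n_points, dim) trajectory matrix."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])

def fnn_fraction(x, dim, lag, rtol=15.0):
    """Fraction of false nearest neighbors when extending dim -> dim + 1."""
    emb = delay_embed(x, dim + 1, lag)
    low = emb[:, :dim]
    false = 0
    for i in range(len(emb)):
        d = np.linalg.norm(low - low[i], axis=1)
        d[i] = np.inf                      # exclude the point itself
        j = int(np.argmin(d))
        if abs(emb[i, -1] - emb[j, -1]) / d[j] > rtol:
            false += 1
    return false / len(emb)

# toy example: a noiseless sine lives on a 1-D closed curve, so dim = 2 suffices
x = np.sin(np.arange(0, 60, 0.1))
lag = embedding_lag(x)                     # roughly a quarter period
traj = delay_embed(x, dim=3, lag=lag)
```

For this noiseless sine the FNN fraction is already near zero at dimension two, consistent with the signal living on a one-dimensional closed curve.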
Rodrigo F. O. Pena, Vinícius Cordeiro, Cesar C. Ceballos, Antônio C. Roque
University of São Paulo, Department of Physics, Ribeirão Preto, Brazil
Correspondence: Rodrigo F. O. Pena (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P27
The subthreshold resonance properties of neurons are usually measured by submitting a neuron to the so-called ZAP function and constructing the impedance amplitude profile as the ratio of Fourier transforms of output and input: Z(f) = FFTout/FFTin [1, 2]. The resonance frequency corresponds to a peak in Z(f). In general, for low-amplitude (~ 10 pA) ZAP functions the voltage response oscillations are symmetric about a reference voltage line. However, there is evidence of asymmetric responses to ZAP functions, with non-coincident depolarizing and hyperpolarizing membrane resonance frequencies [3]. Here we study this effect for high-amplitude ZAP functions (> 10 pA). We propose two measures that differ from the usual Z(f). We take the holding membrane potential (Vhold) as the reference voltage line (voltages above/below it are positive/negative) and, for each frequency, measure the magnitudes of the maximum and minimum voltages normalized by the ZAP amplitude. These will be called Z+(f) and Z−(f). We studied Z+(f) and Z−(f) for a neuron model [4, 5] submitted to a ZAP function. For low ZAP amplitudes, Z+(f) and Z−(f) are identical, but for high ZAP amplitudes they have different resonance frequencies. We characterized the differences between magnitudes ΔZ = Z+(f+) − Z−(f−) and resonance frequencies Δf = f+ − f− in the two-dimensional diagram spanned by Vhold and the time constant of the hyperpolarization-activated current Ih. There are regions in the diagram where the neuron can discriminate the frequency change of the input current based on its voltage response profile. This suggests that a neuron can be sensitive to changes in the frequency of its synaptic inputs, and that this sensitivity depends on intrinsic parameters of its ionic currents. Our theoretical results reproduce a phenomenon that has been observed experimentally [3], suggesting that the quantities Z+(f) and Z−(f) as defined here can be useful in further studies of resonance phenomena in neurons.
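As an illustration of the proposed measures, the following sketch computes Z+(f) and Z−(f) from the response of a toy resonant membrane to a ZAP (chirp) current. The membrane model and every parameter value are assumptions for demonstration; being linear, this model yields Z+ ≈ Z−, as expected in the low-amplitude regime.

```python
import numpy as np

def zap_input(t, amp, f0, f1):
    """Linear-chirp ZAP current sweeping f0 -> f1 Hz over the duration of t."""
    T = t[-1]
    phase = 2.0 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t ** 2 / T)
    f_inst = f0 + (f1 - f0) * t / T          # instantaneous frequency (Hz)
    return amp * np.sin(phase), f_inst

def z_plus_minus(v, amp, f_inst, v_hold, edges):
    """Z+(f), Z-(f): peak depolarization / hyperpolarization per frequency band,
    normalized by the ZAP amplitude."""
    zp, zm = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        dv = v[(f_inst >= lo) & (f_inst < hi)] - v_hold
        zp.append(dv.max() / amp)
        zm.append(-dv.min() / amp)
    return np.array(zp), np.array(zm)

# toy resonant membrane: leak plus a slow feedback current (arbitrary units)
gl, gh, tau, dt, T, amp = 100.0, 300.0, 0.05, 1e-4, 5.0, 1.0
t = np.arange(0.0, T, dt)
i_zap, f_inst = zap_input(t, amp, 1.0, 40.0)
v = np.zeros(len(t))
w = np.zeros(len(t))
for k in range(len(t) - 1):                  # forward-Euler integration
    v[k + 1] = v[k] + dt * (-gl * v[k] - gh * w[k] + i_zap[k])
    w[k + 1] = w[k] + dt * (v[k] - w[k]) / tau
keep = t > 0.5                               # discard the initial transient
edges = np.arange(1.0, 41.0, 5.0)            # 5 Hz wide frequency bands
zp, zm = z_plus_minus(v[keep], amp, f_inst[keep], 0.0, edges)
```

With these toy parameters both profiles peak in the 11–20 Hz range, well above their low-frequency values, i.e. a subthreshold resonance.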
This work was produced as part of the activities of the FAPESP Research, Innovation and Dissemination Center for Neuromathematics (Grant 2013/07699-0). VLC and RFOP are recipients of the respective FAPESP scholarships: 2017/05874-0 and 2013/25667-8. CCC is supported by a CAPES PhD scholarship. ACR is partially supported by the CNPq fellowship Grant 306251/2014-0. RFOP and ACR are also part of the IRTG 1740/TRP 2015/50122-0, funded by DFG/FAPESP.
Hutcheon B, Yarom Y. Resonance, oscillation and the intrinsic frequency preferences of neurons. Trends Neurosci. 2000, 23, 216–222.
Rotstein HG, Farzan N. Frequency preference in two-dimensional neural models: a linear analysis of the interaction between resonant and amplifying currents. J Comput Neurosci. 2014, 37, 9–28.
Fischer L, Leibold C, Felmy F. Resonance properties in auditory brainstem neurons. Front Cell Neurosci. 2018, 12, 8.
Pena RFO, Ceballos CC, Lima V, Roque AC. Interplay of activation kinetics and the derivative conductance determines the resonance properties of neurons. arXiv preprint 2017, arXiv:1712.00306.
Pospischil M, Toledo-Rodriguez M, Monier C, et al. Minimal Hodgkin–Huxley type models for different classes of cortical and thalamic neurons. Biol Cybern. 2008, 99, 427–441.
P28 Implementation of the Potjans-Diesmann cortical microcircuit model in NetPyNE/NEURON with rescaling option
Cecilia Romaro1, Fernando Najman2, Salvador Dura-Bernal3, Antônio C. Roque1
1University of São Paulo, Department of Physics, Ribeirão Preto, Brazil; 2University of São Paulo, Math and Statistics Department, São Paulo, Brazil; 3SUNY Downstate Medical Center, Department of Physiology and Pharmacology, Brooklyn, NY, United States
Correspondence: Cecilia Romaro (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P28
The Potjans-Diesmann (PD) model [1] reproduces the cortical network under a 1 mm2 surface area of early sensory cortex at 1:1 scale. The network consists of around 80,000 leaky integrate-and-fire (LIF) neurons divided into eight cell populations representing excitatory and inhibitory neurons in cortical layers 2/3, 4, 5 and 6. External input is provided by thalamic and cortico-cortical afferents. The model generates spontaneous activity with layer-specific average firing rates and synchrony and irregularity features similar to the ones observed experimentally, and allows a study of the propagation of thalamic inputs from layers 4 and 6 through all layers. The network, originally built in NEST [2], specifies fixed numbers of excitatory and inhibitory neurons per layer, the number of connections between these neuronal populations and the number of external inputs to each cell population. These numbers are based on experimental data. In this work, we converted the PD model with rescaling option from NEST to NetPyNE (www.netpyne.org) [3], a high-level interface to the NEURON simulator [4] that facilitates the development, parallel simulation and analysis of biological neuronal networks. The rescaling option for the PD model was not addressed in the original article but is included in the source code available at the Open Source Brain (OSB) platform; it generates layer-specific average firing rates within the margins of error determined in the original article. The rescaling implemented in the NetPyNE version depends on a single parameter in the interval [0, 1], which is used to resize the numbers of network neurons, connections and external inputs as well as the synaptic weights, while keeping the matrix of connection probabilities and the proportions of cells per population fixed.
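The single-parameter rescaling can be sketched as follows. This is not the OSB or NetPyNE source code, but one standard compensation scheme for downscaled networks: shrink neuron numbers and in-degrees by the scale factor, boost synaptic weights by 1/sqrt(scale), and replace the lost mean input with a DC drive, which preserves both the mean and the variance of the total synaptic input. All numbers in the example are illustrative.

```python
import numpy as np

def rescale(n_full, k_in_full, w_full, rate_full, scale):
    """Downscale neuron count n and in-degree K by `scale`, boost weights w by
    1/sqrt(scale), and add a DC drive replacing the lost mean input, so that
    mean (K * w * rate) and variance (K * w^2 * rate) of the input are kept."""
    n = np.round(np.asarray(n_full) * scale).astype(int)
    k_in = np.round(np.asarray(k_in_full) * scale).astype(int)
    w = np.asarray(w_full) / np.sqrt(scale)
    dc = (1.0 - np.sqrt(scale)) * np.asarray(k_in_full) \
         * np.asarray(w_full) * np.asarray(rate_full)
    return n, k_in, w, dc

# illustrative full-scale population: 20683 neurons, 5000 inputs per neuron,
# weight 87.8 (arbitrary units), presynaptic rate 8 Hz, downscaled to 25%
n, k, w, dc = rescale(20683, 5000.0, 87.8, 8.0, 0.25)
```

The design choice here is that the extra weight restores the input variance (which scales with K·w²), while the DC term restores the mean (which scales with K·w); the connection probability matrix itself is untouched.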
The NetPyNE implementation, which employs parallel NEURON as its backend simulator, opens the possibility of constructing network models with the PD model connection topology but using compartmental conductance-based neuron models instead of LIF neurons. This allows a new array of possible studies, such as investigating the interaction between network topology and dendritic morphology or channel-specific parameters. Additionally, NetPyNE employs a high-level declarative format that clearly separates the model parameters from the underlying implementation, making the PD model easier to understand and manipulate. NetPyNE enables efficient parallel simulation of the model with a single function call and provides a wide array of built-in analysis functions to further explore the model.
This work was produced as part of the activities of the FAPESP Research, Innovation and Dissemination Center for Neuromathematics (Grant 2013/07699-0, S. Paulo Research Foundation).
Potjans TC, Diesmann M. The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model. Cerebral Cortex 2014, 24, 785–806.
Gewaltig MO, Diesmann M. NEST (NEural Simulation Tool). Scholarpedia 2007, 2, 1430.
Lytton WW, Seidenstein AH, Dura-Bernal S, et al. Simulation Neurotechnologies for Advancing Brain Research: Parallelizing Large Networks in NEURON. Neural Computation 2016, 28, 10, 2063–2090.
Carnevale NT, Hines ML. The NEURON Book. 2006, Cambridge, UK: Cambridge University Press.
P29 Effects of spike frequency adaptation on dynamics of a multi-layered cortical network with heterogeneous neuron types
Renan O. Shimoura, Nilton Liuji Kamiji, Rodrigo F. O. Pena, Vinícius Cordeiro, Antônio C. Roque
University of São Paulo, Department of Physics, Ribeirão Preto, Brazil
Correspondence: Renan O. Shimoura (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P29
The cerebral cortex displays a rich repertoire of internally generated dynamic states. Different rhythmic activities can be generated by mechanisms at the network level (e.g. recurrent excitation-inhibition loops) and at the neuronal level (e.g. spike frequency adaptation, SFA). Several processes can influence SFA; one of them is the application of acetylcholine, which decreases SFA in neocortical neurons [1]. Theoretical studies of SFA effects on cortical population dynamics are usually based on artificial architectures built from random networks. In spite of the usefulness of these models, it is important to have computational models that try to accurately represent cortical architecture. Recently, Potjans and Diesmann (PD) [2] introduced a multi-layered network model of the cortical microcircuit based on experimental data from mammalian neocortex. All neurons of the PD model are described by the same leaky integrate-and-fire (LIF) neuron model. Here we study how the dynamic properties of the model change when the excitatory and inhibitory neurons are different and described by the adaptive exponential integrate-and-fire (AdEx) model. Neuronal parameters are tuned so that excitatory neurons are of the regular spiking (RS) type and inhibitory neurons are of the fast spiking (FS) type. SFA can be implemented in RS neurons by the change of a single parameter. We call this the heterogeneous PD model (hPD). Initially, we characterized the spontaneous activity patterns generated in the hPD model by varying the excitation-inhibition balance and the firing rate of the Poissonian background input. Then, we repeated the characterization study for different SFA levels of RS neurons. In general, the hPD model with SFA displayed lower layer-specific average firing rates than the hPD model without SFA. The hPD model with SFA also had mean population spike frequencies closer to experimental data for the awake state.
Additionally, we found regions in the parameter space displaying intermittent network oscillations. We observed the emergence of high-frequency oscillations in the beta-gamma bands when decreasing SFA, in a similar fashion to what has been observed when acetylcholine is released in the visual cortex [3]. In conclusion, the PD model with heterogeneous neuron types provides a good in silico framework to study complex network activity and modulatory effects due to spike frequency adaptation.
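The single-parameter control of SFA in an AdEx RS neuron can be sketched as follows. The parameter values below are typical regular-spiking values from the AdEx literature, assumed here for illustration rather than taken from the hPD model; b is the spike-triggered adaptation increment.

```python
import math

def adex_spike_count(b, i_ext=500.0, t_max=1000.0, dt=0.1):
    """Spike count of an AdEx neuron over t_max ms driven by a constant
    current i_ext (pA); b (pA) is the spike-triggered adaptation increment
    that controls the strength of spike-frequency adaptation."""
    C, gL, EL = 200.0, 10.0, -70.0            # pF, nS, mV
    VT, DT, Vr, Vpeak = -50.0, 2.0, -58.0, 0.0
    a, tau_w = 2.0, 120.0                     # nS, ms
    v, w, n = EL, 0.0, 0
    for _ in range(int(t_max / dt)):
        dv = (-gL * (v - EL) + gL * DT * math.exp((v - VT) / DT)
              - w + i_ext) / C
        v_new = v + dt * dv
        if v_new >= Vpeak:                    # spike: reset v, increment w
            v = Vr
            w += b
            n += 1
        else:
            v = v_new
            w += dt * (a * (v - EL) - w) / tau_w
    return n

no_adapt = adex_spike_count(b=0.0)
with_adapt = adex_spike_count(b=100.0)        # stronger SFA -> fewer spikes
```

Raising b alone converts tonic firing into adapting firing with a lower steady-state rate, which is the single-parameter manipulation the abstract refers to.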
This work is part of the activities of the FAPESP Research, Innovation and Dissemination Center for Neuromathematics (Grant 2013/07699-0, S. Paulo Research Foundation). ROS, RFOP and VL are recipients of the respective FAPESP scholarships: 2017/07688-9, 2013/25667-8 and 2017/05874-0. NLK is supported by FAPESP Grant 2016/03855-5. ACR is partially supported by the CNPq fellowship Grant 306251/2014-0. RFOP and ACR are also part of the IRTG 1740/TRP 2015/50122-0, funded by DFG/FAPESP.
Tang A, Bartels AM, Sejnowski TJ. Effects of cholinergic modulation on responses of neocortical neurons to fluctuating input. Cereb Cortex 1997, 7, 502–509.
Potjans TC, Diesmann M. The cell-type specific cortical microcircuit: Relating structure and activity in a full-scale spiking network model. Cereb Cortex 2014, 24, 785–806.
Rodriguez R, Kallenbach U, Singer W, et al. Short- and long-term effects of cholinergic modulation on gamma oscillations and response synchronization in the visual cortex. J Neurosci 2004, 24, 10369–10378.
Central Michigan University, Engineering and Technology, Mt Pleasant, MI, United States
Correspondence: Anu Aggarwal (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P30
P31 Understanding action potential evolution in axon due to focal geometric deformation using a hybrid 1D-3D model
Yuan-Ting Wu, Ashfaq Adnan
University of Texas Arlington, Mechanical and Aerospace Engineering, Arlington, TX, United States
Correspondence: Yuan-Ting Wu (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P31
Localized deformation of the axon is observed in many scenarios, such as traumatic brain injury, Alzheimer's disease (AD), and multiple sclerosis (MS). These observations raise the question: how much deformation can block or change action potential transport? Specifically, any deviation of the near-cylindrical axonal cross section due to cell–cell contact can lead to changes in the action potential. However, predicting the answer to such a question is challenging. The major challenge is the disparity of length and time scales. The characteristic length scale of a human axonal spike (non-myelinated) is around 10 mm (spike to spike), but the focal geometry change can be 1–10 µm. To capture the shape accurately with a numerical method, the discretization size can go as small as 0.1 µm where needed. This requires roughly ~ 10,000³ data points for a 3-D model, or ~ 10,000² points for a 2-D model. The other issue is the nature of the action potential: it is a propagating wave whose governing equations are numerically unstable when solved with an explicit method. For a 0.1 µm mesh size, a time step of 10⁻⁶ µs is needed for an explicit method, or a ~ 1 µs time step for an implicit method (less overall computational resource). To resolve this dilemma, we propose a hybrid 1D-3D model consisting of two parts: (1) a one-dimensional cable-theory model with a Hodgkin–Huxley membrane capacitor simulating the cylindrical segments before and after the deformation site; (2) a 2-D meshed finite element method (FEM) model for the deformed part and its neighborhood. The 2-D model currently uses cylindrical coordinates and can be upgraded to a 3-D model in the future. It uses the Laplace equation in the intracellular medium and the Hodgkin–Huxley capacitor at the membrane. The two models interact at each time step to ensure that the simulated condition in the FEM part represents its behavior in a long axon.
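The time-step dilemma can be made concrete with the standard stability bound for an explicit (forward-Euler) discretization of the passive cable equation, dt ≤ dx²/(2D) with diffusivity D = d/(4·Ra·Cm). The membrane parameters below are illustrative assumptions, not values from this study.

```python
def cable_diffusivity(d, ra, cm):
    """Effective diffusivity D = d / (4 Ra Cm) of the passive cable equation,
    for diameter d (cm), axial resistivity Ra (ohm*cm), capacitance Cm (F/cm^2)."""
    return d / (4.0 * ra * cm)

def explicit_dt_max(dx, diffusivity):
    """Forward-Euler (FTCS) stability bound dt <= dx^2 / (2 D) in 1-D, in seconds."""
    return dx ** 2 / (2.0 * diffusivity)

# assumed numbers: 1 um axon, Ra = 100 ohm*cm, Cm = 1 uF/cm^2
D = cable_diffusivity(1e-4, 100.0, 1e-6)        # cm^2/s
dt_fine = explicit_dt_max(1e-5, D)              # 0.1 um mesh near the deformation
dt_coarse = explicit_dt_max(1e-4, D)            # 1.0 um mesh in the uniform cable
```

Because the bound is quadratic in dx, a tenfold mesh refinement tightens the admissible explicit time step a hundredfold, which is why an implicit scheme (or the hybrid 1D-3D split) pays off near the deformation.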
Zhaojie Yao, Azadeh Yazdan-Shahmorad
University of Washington, Departments of Bioengineering & Electrical Engineering, Seattle, WA, United States
Correspondence: Zhaojie Yao (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P32
Medorian Gheorghiu1, Jonathan Whitlock2, Raul Muresan3, Bartul Mimica2
1Transylvanian Institute of Neuroscience, Cluj, Romania; 2Norwegian University of Science and Technology (NTNU), Kavli Institute for Systems Neuroscience, Trondheim, Norway; 3Romanian Institute of Science and Technology, Center for Cognitive and Neural Studies, Cluj-Napoca, Romania
Correspondence: Medorian Gheorghiu (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P33
The patterns of neuronal activation during planning and decision-making in freely behaving rats are little understood, partly because the cortical areas responsible for behavior integrate a multitude of sensory and motor information. Moreover, with freely behaving animals, trials do not have a precise length and structure; the number of events is dynamic, and events depend on the decisions the animals make while exploring the environment. Here, we use a visualization technique based on color sequences [3] to investigate the expression of multi-neuron firing patterns across the posterior parietal cortex (PPC) and frontal motor cortex (AGm) that are specific to various behavioral states.
Experimental design and behavioral paradigms: We implemented an instructed task, where the rat runs to a "Home" well with a fixed location during a trial, followed by a free-choice exploratory task, where the rat searches for a "Target" well located randomly across an arena with 36 wells. A custom-made NeuroNexus micro-drive targeting PPC and AGm simultaneously (8 tetrodes in each area) was implanted in the rat's brain. Data were high-pass filtered using a non-causal Gaussian filter (300 Hz) and spikes were detected using an amplitude threshold set at four standard deviations. Spikes were sorted and the best responding cells (four from PPC and seven from AGm) were selected based on their responses during the experimental task.
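The amplitude-threshold detection step can be sketched as follows; this is an illustrative reimplementation rather than the recording pipeline, and the refractory argument and the negative-going threshold convention are assumptions.

```python
import numpy as np

def detect_spikes(x, fs, n_sd=4.0, refractory=0.001):
    """Indices where the filtered signal first dips below -n_sd standard
    deviations, keeping a minimum separation (refractory, s) between events."""
    thr = n_sd * np.std(x)
    cross = np.nonzero((x[1:] < -thr) & (x[:-1] >= -thr))[0] + 1
    keep, last = [], -np.inf
    for idx in cross:
        if idx - last > refractory * fs:
            keep.append(int(idx))
            last = idx
    return np.array(keep, dtype=int)

# synthetic check: Gaussian noise with five injected negative-going spikes
rng = np.random.default_rng(0)
fs = 10000.0
x = rng.standard_normal(5000)
spike_at = np.array([500, 1500, 2500, 3500, 4500])
x[spike_at] -= 12.0
found = detect_spikes(x, fs, n_sd=6.0)   # high threshold for the clean toy signal
```

The paper's four-standard-deviation setting corresponds to `n_sd=4.0`; the toy check uses a stricter threshold so that the synthetic noise produces no false crossings.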
Conclusions: Using the visualization technique, we can extract synchronous patterns of spikes (Fig 1A) or firing-rate patterns (Fig 1F). The synchronous patterns of spikes were not clearly detected for small τ (Fig 1A, τ = 20 ms). For large τ (Fig 1F, τ = 250 ms) a rate covariation was visible for approx. 3 s after the rats started to lick in the "Home" well, where a specific firing pattern was visible (Fig 1H). This was reflected in the color sequence as a greenish color. A different combination of cell firing was visible (Fig 1J) as a purple pattern immediately after the rat left the location of the "Home" well. Our results suggest that behavioral events are correlated with specific and coordinated firing patterns across PPC and AGm. These patterns evolve on relatively slow time scales (> 200 ms). A further investigation involving more cells is required to determine whether joint-spike events are present at small time scales.
Whitlock JR, Sutherland RJ, Witter MP, et al. Navigating from hippocampus to parietal cortex. PNAS 2008, 105(39), 14755–14762.
Pesaran B, Nelson MJ, Andersen RA. Free choice activates a decision circuit between frontal and parietal cortex. Nature 2008, 453, 406–409
Jurjut O, et al. J Neurophysiol 2009.
P34 Mean field theory of large and sparse recurrent networks of spiking neurons including temporal correlations of spike-trains
Sebastian Vellmer1, Benjamin Lindner2
1Bernstein Center for Computational Neuroscience, Complex Systems and Neurophysics, Berlin, Germany; 2Humboldt University Berlin, Physics Department, Berlin, Germany
Correspondence: Sebastian Vellmer (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P34
P35 Probabilistic analysis of high-dimensional stochastic firing rate models: Bridging neural network models and firing rate models
Ehsan Mirzakhalili, Bogdan Epureanu
University of Michigan, Department of Mechanical Engineering, Ann Arbor, MI, United States
Correspondence: Ehsan Mirzakhalili (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P35
Advances in the characterization of neurons and the increase in computational capacity have enabled researchers to build larger and more detailed models of neural networks. While such models have proven to be helpful, the interpretation of results obtained from them is not straightforward due to the lack of necessary analytical and mathematical tools. Hence, a framework that enables rigorous analysis of detailed network models is invaluable. To establish our proposed framework, we start with a network that can resemble working memory. The duration of the recall and the average firing rate during the recall are used to quantify the characteristics of such network models. The mechanisms that can affect these metrics can be studied by varying different parameters of the model one by one. However, such an analysis is cumbersome in large detailed networks. Alternatively, rate models can be constructed that faithfully represent key dynamics of detailed network models, especially if noise is incorporated in such rate models. Rate models are attractive not only because they are computationally efficient, but because they can be analyzed based on a rich mathematical foundation of dynamical systems. Hence, the effect of model parameters on the presence of working memory can be studied by examining the bifurcation diagram of rate models that correspond to such network models. Such deterministic bifurcation analyses can only show the existence of multiple stable or unstable solutions for a firing rate model, which is not enough to describe the dynamics of network models. However, adding noise to rate models enables establishing the connection between rate models and network models by allowing calculation of metrics such as escape time and the probability of finding the system at each point on the bifurcation diagram.
Calculation of such metrics, and the effects of noise on the bifurcation analysis of such dynamical systems, have not been investigated previously for the analysis of neural network dynamics. In this research, we introduce a probabilistic framework based on a stochastic bifurcation analysis of rate models in the presence of noise. We focus first on models that consist of an excitatory population and an inhibitory population. Stochastic differential equations are formulated for firing rate models by considering the states to be large, but the noise to be comparatively small. Hence, a linearization of the firing rate function with respect only to noise can be accurate. Next, the system of stochastic differential equations is converted to a Fokker–Planck equation. The stationary solution of the Fokker–Planck partial differential equation reveals the probability of finding the system at a certain firing rate. We solve the Fokker–Planck equation numerically to find such stationary solutions at various parameter values, hence building stochastic bifurcation diagrams. The results obtained from the stationary solutions of the Fokker–Planck equation show how noise can change the probability of finding the system in each of the solutions in the bifurcation diagram. The results show that the same magnitude of noise can affect each stable solution differently. Therefore, evaluating the probability distribution of solutions to rate models can increase the capability of these models to analyze network population activity obtained from experiments or numerical models.
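The stationary-solution step can be illustrated in one dimension, where the Fokker–Planck equation integrates in closed form up to a quadrature. The bistable drift below is a toy rate model with assumed parameters, not the authors' excitatory-inhibitory system.

```python
import numpy as np

def stationary_density(r, drift, sigma):
    """Stationary Fokker-Planck density for dr = drift(r) dt + sigma dW:
    p(r) ~ exp(-2 U(r) / sigma^2) with potential U'(r) = -drift(r)."""
    a = drift(r)
    du = -0.5 * (a[1:] + a[:-1]) * np.diff(r)   # trapezoidal step of U
    u = np.concatenate(([0.0], np.cumsum(du)))
    p = np.exp(-2.0 * (u - u.min()) / sigma ** 2)  # shift to avoid overflow
    norm = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(r))
    return p / norm

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
drift = lambda r: -r + sigmoid(12.0 * r - 5.0)  # toy bistable firing-rate model
r = np.linspace(-0.2, 1.2, 1401)
p = stationary_density(r, drift, 0.15)
```

The resulting density is bimodal: its peaks sit at the two stable branches of the deterministic bifurcation diagram, and the relative mass near the barrier between them is controlled by the noise strength sigma.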
Nikita Novikov1, Boris Gutkin2
1St. Petersburg School of Economics, Higher School of Economics, Moscow, Russian Federation; 2École Normale Supérieure, Paris, France
Correspondence: Nikita Novikov (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P36
Working memory (WM) is the ability to temporarily maintain information about stimuli that are no longer present in the sensory systems. WM retention is associated with elevated firing rates in the neural populations that encode the memorized stimuli [1]. Classically, WM is modelled as a bistable system with a background low-activity state and a high-firing-rate state that corresponds to the memory being retained [2]. Alongside the firing rates, oscillatory activity is also modulated during WM retention; notably, one observes an increase of beta power in the stimulus-selective prefrontal populations [3]. One hypothesis is that the beta oscillations stabilize the persistent WM activity, thereby preserving the status quo [4]. However, the mechanisms for this network stabilization are not understood.
In this work, we propose a mechanism that allows WM retention to be stabilized in the presence of distractors by a non-selective beta-oscillatory input. We consider two identical excitatory-inhibitory populations described by Wilson-Cowan-like equations, the first one selective to the stimulus (S), the second to the distractor (D). The populations are bistable and coupled by mutual inhibition, so only one of them can be in the memory state at a time. Both populations received beta-band oscillatory input. In our model, the memory state (as opposed to the background state) is associated with beta-band resonance. Consequently, the oscillatory input entrained a population only if it was in the memory state. Furthermore, oscillatory entrainment produced an increase of the firing rate in the population, due to the non-linear properties of the input–output relation of neurons.
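A deliberately reduced caricature of this setup (two scalar rate units standing in for the excitatory-inhibitory populations; all parameters assumed) shows the retention property: the stimulus-selective unit holds its memory state under the shared beta-band drive while the distractor-selective unit remains in the background state.

```python
import numpy as np

def simulate(T=500.0, dt=0.05, tau=10.0, w_self=12.0, w_cross=4.0,
             theta=5.0, beta_amp=0.5, beta_f=0.02):
    """Two mutually inhibiting bistable rate units under a shared 20 Hz
    (beta_f = 0.02 cycles/ms) sinusoidal drive; S starts in the memory state."""
    f = lambda x: 1.0 / (1.0 + np.exp(-x))    # sigmoidal rate function
    s, d, t = 1.0, 0.0, 0.0                   # S up (memory), D down (background)
    for _ in range(int(T / dt)):
        osc = beta_amp * np.sin(2.0 * np.pi * beta_f * t)
        ds = (-s + f(w_self * s - w_cross * d - theta + osc)) / tau
        dd = (-d + f(w_self * d - w_cross * s - theta + osc)) / tau
        s, d = s + dt * ds, d + dt * dd
        t += dt
    return s, d

s_end, d_end = simulate()
```

In this caricature the oscillation perturbs both units identically, yet only the unit sitting in the high-activity well follows it appreciably; the mutual inhibition keeps the distractor unit from being pulled into the memory state.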
Supported by Russian Science Foundation grant (No: 17-11-01273).
Goldman-Rakic PS: Cellular basis of working memory. Neuron 1995, 14(3):477–485.
Amit DJ, Brunel N: Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cereb Cortex 1997, 7(3):237–252.
Lundqvist M, Rose J, Herman P, Brincat SL, Buschman TJ, Miller EK: Gamma and beta bursts underlie working memory. Neuron 2016, 90(1):152–164.
Engel AK, Fries P: Beta-band oscillations—signalling the status quo? Curr Opin Neurobiol 2010, 20(2):156–165.
P37 Artificial evolution of networks of artificial adaptive exponential neurons for multiplicative operations
Muhammad Khan, Borys Wrobel
Adam Mickiewicz University in Poznan, Evolving Systems Laboratory, Poznan, Poland
Correspondence: Borys Wrobel (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P37
This work was supported by the Polish National Science Center (project EvoSN, UMO-2013/08/M/ST6/00922). MAK acknowledges the support of the PhD programme of the KNOW RNA Research Center in Poznan (No. 01/KNOW2/2014). We are grateful to Volker Steuber and Neil Davey for discussions.
P38 Artificial evolution of very small spiking neural network robust to noise and damage for recognizing temporal patterns
Muhammad Yaqoob, Borys Wrobel
Adam Mickiewicz University in Poznan, Evolving Systems Laboratory, Poznan, Poland
Correspondence: Borys Wrobel (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P38
This work was supported by the Polish National Science Center (project EvoSN, UMO-2013/08/M/ST6/00922). MY acknowledges the support of the PhD programme of the KNOW RNA Research Center in Poznan (No. 01/KNOW2/2014). We are grateful to Volker Steuber and Neil Davey for discussions.
Seattle University, Department of Mathematics, Seattle, WA, United States
Correspondence: Brian Fischer (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P39
Perceptual detection and estimation may be improved in the multisensory condition relative to the unisensory condition by combining evidence across sensory modalities. The integrative properties of multisensory neurons are often evaluated by comparing the relative strength of responses to unimodal and multimodal stimuli using the multisensory enhancement index [1]. However, it remains unknown which features of multimodal neural responses lead to enhanced perceptual performance in multisensory conditions. Here we use a model for the neural implementation of Bayesian inference to reassess the commonly used multisensory enhancement index as a measure of multisensory integration. The non-uniform population code model describes how populations of neurons can perform Bayesian inference. It assumes that the population structure is matched to the statistics of the environment, where preferred stimuli are drawn from the prior distribution and the neural population response to a stimulus is proportional to the likelihood function. It has been shown that, in this framework, optimal cue combination is multiplicative at the population level [2]. Specifically, a center-of-mass decoding of the population will approximate a Bayesian estimate when the population response to multiple stimulus cues is proportional to the product of the responses to the individual cues. We show here that the mechanism used to implement multiplicative selectivity determines whether multisensory enhancement is correlated with performance enhancement. In the non-uniform population code, optimal multisensory integration only depends on the pattern of activity over the population, not the strength of the responses. Therefore, if neurons implement perfect multiplication of their inputs, multisensory enhancement will be unrelated to performance. If neurons implement an approximate multiplication, then multisensory enhancement may be correlated with performance enhancement.
Specifically, we examined the responses of model neurons that use a sigmoid input–output transformation to perform approximate multiplication [3]. In this network, both the accuracy of the multiplication and the enhancement index changed depending on where the input fell on the sigmoidal curve. Thus, multisensory enhancement was correlated with performance enhancement because it was correlated with the accuracy of the multiplication. Therefore, in this framework, multisensory enhancement may be correlated with, but is not causally related to, performance enhancement. This work highlights the importance of using population measures to determine which features of multisensory neural responses lead to enhanced perceptual performance in multisensory conditions.
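The multiplicative population-level combination and its center-of-mass readout can be sketched as follows; the stimulus values and Gaussian likelihood widths are illustrative assumptions.

```python
import numpy as np

prefs = np.linspace(-10.0, 10.0, 201)       # preferred stimuli (uniform prior)

def pop_response(stim, sigma):
    """Population response proportional to a Gaussian likelihood of one cue."""
    return np.exp(-0.5 * ((prefs - stim) / sigma) ** 2)

r_a = pop_response(1.0, 2.0)                # cue A: estimate 1.0, sd 2.0
r_b = pop_response(3.0, 1.0)                # cue B: estimate 3.0, sd 1.0
r_m = r_a * r_b                             # exact multiplicative combination
com = np.sum(prefs * r_m) / np.sum(r_m)     # center-of-mass decode

# Bayesian (inverse-variance weighted) combination of the two Gaussian cues
bayes = (1.0 / 2.0 ** 2 * 1.0 + 1.0 / 1.0 ** 2 * 3.0) \
        / (1.0 / 2.0 ** 2 + 1.0 / 1.0 ** 2)
```

For Gaussian likelihoods the center of mass of the product recovers the inverse-variance-weighted Bayesian estimate (here 2.6), and this holds regardless of the overall response strength, which is why enhancement per se carries no information about decoding accuracy in the exact-multiplication case.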
Stein BE, Stanford TR. Multisensory integration: current issues from the perspective of the single neuron. Nat Rev Neurosci. 2008;9: 255–266.
Fischer BJ, Peña JL. Optimal nonlinear cue integration for sound localization. J Comput Neurosci. 2017;42: 37–52.
Fischer BJ, Anderson CH, Peña JL. Multiplicative auditory spatial receptive fields created by a hierarchy of population codes. PLoS One. 2009;4: e8015.
University of Waterloo, Systems Design Engineering, Waterloo, Canada
Correspondence: Bryan Tripp (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P40
It is a significant challenge to develop physiologically grounded neural models that produce ethologically relevant behavior. Deep convolutional networks have potential in this direction, as they can perform fairly realistic visual processing. However, many aspects of their behavior, activity, and mechanisms are unrealistic. For example, although the architectures of well-known convolutional networks (e.g. ResNet) were inspired by primate visual cortex, they are dissimilar in their specific layers and connections, and in the statistics of these connections (e.g. distributions of sparseness and in-degree). If convolutional networks had physiologically realistic architectures, they could be compared more directly with the brain. To this end, the current study optimizes convolutional network hyperparameters to produce networks that match various anatomical and physiological data. The networks have one-to-one homologies with primate visual areas. The main steps in this approach are as follows: (1) assemble data related to physiological network architecture (e.g. receptive field sizes in each area; fraction of inputs to each area that come from each source) from databases and literature; (2) find mathematical expressions for these network properties in terms of convolutional network hyperparameters; (3) define a cost function based on the difference between physiological parameters and corresponding network parameters; (4) find hyperparameters that minimize the cost. Care is needed in formulating the cost function, to ensure consistency between receptive field sizes and spatial resolution across converging paths. If the optimization step is successful, this procedure produces a convolutional network architecture that is driven by physiological data. A convolutional layer generally corresponds to a single layer of a single cortical area.
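Steps (2)–(4), together with the float-relaxation strategy described later, can be caricatured with a toy problem: relax the integer hyperparameters to floats (here in log space), minimize the mismatch cost by gradient descent (a stand-in for Adam), and round the result back to integers. The target values are assumed placeholders for physiological data.

```python
import numpy as np

# toy physiological targets (assumed values), e.g. neuron count and in-degree
targets = np.array([12000.0, 850.0])

def cost(u):
    """Squared log-space mismatch between relaxed hyperparameters exp(u)
    and the physiological targets."""
    return np.sum((u - np.log(targets)) ** 2)

def num_grad(fun, u, eps=1e-6):
    """Central-difference gradient in the relaxed (float) parameter space."""
    g = np.zeros_like(u)
    for i in range(len(u)):
        up, dn = u.copy(), u.copy()
        up[i] += eps
        dn[i] -= eps
        g[i] = (fun(up) - fun(dn)) / (2.0 * eps)
    return g

u = np.log(np.array([5000.0, 300.0]))       # float relaxation of integer params
for _ in range(300):
    u -= 0.3 * num_grad(cost, u)            # plain gradient descent (Adam stand-in)
hyper = np.round(np.exp(u)).astype(int)     # round back to integer hyperparameters
```

The log-space parameterization keeps the relaxed parameters positive during optimization; the real cost in the study couples many parameters across areas, which is what makes the rounding step nontrivial there.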
Physiological parameters associated with a layer are the number of neurons (estimated from cell density, layer thickness, and cortical surface area), the number of extrinsic inputs per neuron (estimated from cell reconstructions), and receptive field sizes (estimated from electrophysiology studies). Parameters associated with inter-area connections include the fraction of neurons innervating each target that come from each source area, and the percentage of supragranular versus infragranular cells that contribute to these projections.
The optimization is a non-convex integer programming problem, a type of problem that can be difficult in general. However, good results have been consistently obtained so far by converting integer parameters to floating point numbers, optimizing with the Adam algorithm, and rounding the results. Work in progress includes accounting for varying degrees of certainty of different parameters, and testing networks derived with these methods on standard vision problems such as CIFAR-10. Open-source code is available from https://github.com/bptripp/calc. Future work will include application of this approach to a model of visually guided grasping. It is also hoped that the approach can be generalized to mouse and human networks.
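As a concrete illustration of the relax-optimize-round strategy described above, here is a minimal sketch (not the actual CALC code; the targets, cost, and hyperparameters are invented for illustration), using a hand-rolled Adam on a toy quadratic mismatch:

```python
import numpy as np

def adam_minimize(grad, x0, lr=0.05, steps=4000,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    """Minimize a smooth cost via Adam, given its gradient."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        # Bias-corrected update
        x -= lr * (m / (1 - beta1**t)) / (np.sqrt(v / (1 - beta2**t)) + eps)
    return x

# Hypothetical physiological targets for four integer hyperparameters
# (e.g. kernel sizes and channel-count multipliers).
targets = np.array([12.0, 7.0, 3.0, 5.0])

def cost_grad(x):
    # Gradient of the squared mismatch between the relaxed (floating point)
    # hyperparameters and their physiological targets.
    return 2.0 * (x - targets)

relaxed = adam_minimize(cost_grad, x0=np.ones(4))
hyperparams = np.round(relaxed).astype(int)  # round back to integers
```

In the real problem the cost couples hyperparameters across layers (e.g. receptive field sizes must stay consistent with spatial resolution along converging paths), so the gradient is obtained by automatic differentiation rather than by hand.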
This work is a step toward physiologically grounded neural models that produce ethologically relevant behavior. Multiple other steps are not addressed here, including developing realistic learning experiences. Ultimately, such models may lead to new insights into relationships between low-level mechanisms, representations, and behavior.
Jacqueline Hynes1, David Brandman2, John Donoghue1, Carlos Vargas-Irwin1
1Brown University, Department of Neuroscience, Providence, RI, United States; 2Brown University, Department of Engineering, Providence, RI, United States
Correspondence: Jacqueline Hynes (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P41
The biological computations underlying sensory perception, cognition, and goal-directed movements are thought to emerge from interactions across large groups of cortical neurons. While it is now possible to record ever larger neural populations, detecting functional groupings of neurons and characterizing their computational operations has proven notoriously difficult. The problem of finding functional groupings is at one level statistical in nature, i.e., showing that neurons within a group are significantly more similar in function to one another than to neurons outside the group. In cortical areas where the tuning properties of individual neurons are well established, the task of grouping neurons according to their functional similarities could seem trivial (e.g., shared orientation tuning). However, a growing number of studies have shown that these simple tuning models are often task- or context-dependent and break down under more complex or ethologically relevant conditions. Although measures of coordinated spike activity have been widely used to infer the functional relationships between neurons, statistical and theoretical issues have limited the success of this approach. Here, we introduce a new approach for identifying NETworks of functionally SIMilar neurons (SIMNETS). Our approach is based on the premise that we can characterize the computation being performed by a neuron by examining the intrinsic relationship between the outputs (spike trains) it emits across different sets of inputs. We can represent these relationships using a pairwise distance matrix, where each entry represents the similarity between two spike trains. We refer to this as a ‘trial similarity matrix’ (TSM). Comparing the TSMs of simultaneously recorded neurons allows us to quantify the relationship between their computational properties.
The SIMNETS algorithm involves: (i) calculating the similarities between the different spike-train time-series generated by a single neuron, on a neuron-by-neuron basis, (ii) calculating the correlations between the resulting TSMs to produce an N×N correlation matrix (NCM), and (iii) using dimensionality reduction tools combined with agglomerative clustering techniques to identify neurons with similar functional properties within the NCM. We have tested the SIMNETS algorithm using synthetic data with known ground truth. Results show that SIMNETS can identify groups of neurons with similar computational properties, even if they use different encoding schemes (based on firing rate or precise spike timing). We also demonstrate that clustering performance is severely impaired using standard approaches that directly compare spike trains between different neurons. To show the generality of the method, we applied SIMNETS to two publicly available datasets: 112 primate V1 neurons recorded during the presentation of drifting gratings, and 80 rat hippocampal neurons recorded during a navigation task. Our results show that the algorithm can detect groups of functionally related neurons within these diverse neuronal populations. The SIMNETS framework provides a principled way to describe the relationship between neurons and determine whether functional categories are present, without having to impose specific encoding models a priori. This data-driven approach will greatly facilitate the analysis of networks of neurons engaged during complex natural behaviors.
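A minimal sketch of steps (i)-(ii) might look as follows; the spike-count distance is a stand-in for a proper spike-train metric (e.g. Victor-Purpura), the synthetic data are invented for illustration, and the final clustering step is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def trial_similarity_matrix(spike_counts):
    """Pairwise (dis)similarity of one neuron's responses across trials.
    A spike-count difference stands in for a spike-train metric such as
    Victor-Purpura distance."""
    c = np.asarray(spike_counts, dtype=float)
    return np.abs(c[:, None] - c[None, :])

def simnets_ncm(population):
    """Correlate the upper triangles of the per-neuron trial-similarity
    matrices (TSMs) to obtain the N x N correlation matrix (NCM)."""
    n_trials = len(population[0])
    iu = np.triu_indices(n_trials, k=1)
    flat = np.array([trial_similarity_matrix(c)[iu] for c in population])
    return np.corrcoef(flat)

# Synthetic population: neurons 0 and 1 follow stimulus A, neuron 2 follows B.
stim_a = rng.poisson(5.0, 40)
stim_b = rng.poisson(5.0, 40)
population = [stim_a + rng.poisson(1.0, 40),
              stim_a + rng.poisson(1.0, 40),
              stim_b + rng.poisson(1.0, 40)]
ncm = simnets_ncm(population)
# Functionally similar neurons (0 and 1) have strongly correlated TSMs;
# the omitted clustering step would group them together.
```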
Siwei Qiu, Carson Chow
National Institute of Health, NIDDK, Lab of Biological Modeling, Bethesda, MD, United States
Correspondence: Siwei Qiu (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P42
We study finite-size fluctuations in a deterministic coupled spiking neural network with nonuniform coupling. We generalize a previously developed theory of finite size effects for globally coupled neurons. In the uniform case, mean field theory is well defined by averaging over the network as the number of neurons in the network goes to infinity. However, for nonuniform coupling it is no longer possible to average over the entire network. We show that if the coupling function approaches a continuous function in the infinite system size limit then an average over a local neighborhood can be defined such that mean field theory is well defined. We then derive a perturbation expansion in the inverse system size around the mean field limit for the covariance of the synaptic drive. We also show that the fluctuations in the firing rate of a neuron cannot be computed perturbatively in a similar series.
Tristan Aft1, Sorinel Oprisan1, Mona Buhusi2, Catalin Buhusi2
1College of Charleston, Department of Physics and Astronomy, Charleston, SC, United States; 2Utah State University, Department of Psychology, Logan, UT, United States
Correspondence: Tristan Aft (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P43
Spatial and temporal dimensions are fundamental for orientation, adaptation, and the survival of organisms. The hippocampus has been identified as the main neuroanatomical structure involved in both space and time perception, and it has been hypothesized that it may in fact be involved in the conceptual understanding of many other dimensions. The spatial position of an animal can be reliably decoded from the neuronal activity of several cell populations in the hippocampus. In particular, place cells in the hippocampus fire at only a few locations in a spatial environment, and the position of the animal can be readily read out from single active neurons. It has recently been found that some neurons in the hippocampus, called “time cells”, fire at specific moments, coding time intervals during a behavioral task. In this study we investigated interval timing, i.e., the ability to perceive and use durations in the supra-second range. One important characteristic of interval timing is scale invariance, i.e., the time-estimation error increases linearly with the estimated duration. Scale invariance is extremely stable over behavioral, lesion, pharmacological, and neurophysiological manipulations, and has been observed across species from invertebrates to fish, birds, and mammals, such as mice, rats, and humans. Although the neuroanatomy of interval timing is still under debate, hippocampal lesions have been shown to affect peak time in peak-interval procedures. For example, dorsal hippocampal (DH) lesions produced leftward shifts in peak times, while ventral hippocampal (VH) lesions produced a temporary rightward shift of peak times. We mathematically modeled the hippocampal memory of time as a random variable with a wide range of values around the desired criterion time. The key assumption of our study is that the hippocampus creates a topological map of durations, similar to the spatial map created by place cells.
As a result, we successfully modeled peak shift due to the extent and location of the lesions and were able to identify the effect of lesions on scale invariance of interval timing.
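A minimal sketch of the key modeling assumption, that remembered durations scatter around the criterion time with a spread proportional to that time (a constant coefficient of variation is our illustrative choice, not necessarily the authors' exact model):

```python
import numpy as np

rng = np.random.default_rng(1)

def remembered_durations(criterion, n_trials=20000, cv=0.2):
    """Draw remembered durations around a criterion time; a coefficient of
    variation that is constant across criteria yields scale invariance."""
    return rng.normal(criterion, cv * criterion, n_trials)

# Timing error grows linearly with the timed duration (the scalar property),
# so response distributions rescaled by the criterion superimpose.
relative_spread = {T: remembered_durations(T).std() / T
                   for T in (10.0, 30.0, 90.0)}
```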
Chang Sub Kim
Chonnam National University, Department of Physics, Gwangju, Republic of Korea
Correspondence: Chang Sub Kim (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P44
We cast the free energy principle (FEP) in the neurosciences in the framework of the principles of mechanics, which articulate that all living organisms are evolutionarily self-organized to minimize the sensory uncertainty about environmental encounters. The FEP is a recent endeavor to answer ‘what is life?’: life is characterized by temporal regularity and self-adaptiveness, which may be encapsulated, in contemporary terms, in autopoiesis and enaction. The FEP suggests that organisms implement minimization by calling forth the informational free energy (IFE) in the brain, and that the time-integral of the IFE gives an estimate of the upper bound of the sensory uncertainty. We propose that the minimization of the IFE must continually take place over a finite temporal horizon of an organism’s unfolding environmental event. Our scheme is a generalization of the conventional theory, which approximates minimization of the IFE at each point in time by performing a gradient-descent computation. We adopt the Laplace-encoded IFE as an informational Lagrangian in implementing the variational FEP in the framework of the principle of least action (Hamilton’s principle). Subscribing to standard Newtonian dynamics, we consider the IFE a function of position and velocity as metaphors for the organism’s brain variable and its first-order time derivative, respectively. The brain variable maps onto the first-order sufficient statistics of the probability density launched in the organism’s brain to perform Bayesian filtering of noisy sensory data, called recognition dynamics (RD). In the ensuing Hamiltonian formulation, the RD prescribes momentum, conjugate to position, as a mechanical measure of prediction error weighted by mass, the precision.
We apply our formalism to a biophysically grounded model of neuronal dynamics by suggesting that the large-scale architecture of the brain is an emergent, coarse-grained description of the interacting many-body neurons. The resulting RD is deterministic and hierarchical, and notably incorporates dynamics of both predictions and prediction errors of the perceptual states. Consequently, the detail of the neural circuitry from our formulation differs from that supported by generalized filtering, which generates only dynamics of predictions of the causal and hidden states, not of their prediction errors. However, the general structure of message passing, namely descending predictions and ascending prediction errors in the hierarchical network, shows close similarity.
Schrödinger E: What is Life? Mind and Matter. Cambridge: Cambridge Univ. Press; 1967.
Friston K (2010). The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11: 127–138.
Buckley C L, Kim C S, McGregor S, and Seth A K (2017). The free energy principle for action and perception: A mathematical review. Journal of Mathematical Psychology, 81: 55–79. http://dx.doi.org/10.1016/j.jmp.2017.09.004.
Landau L D and Lifshitz E M: Mechanics, 3rd Edition. Amsterdam: Elsevier Ltd; 1976.
Kim C S (2018). Recognition dynamics in the brain under the free energy principle. Neural Computation, submitted. https://arxiv.org/abs/1710.09118.
Friston K, Stephan K, Li B, and Daunizeau J (2010). Generalized filtering. Mathematical Problems in Engineering, 261670.
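Schematically, the least-action construction of P44 can be summarized as follows (notation ours; the abstract itself gives no equations):

```latex
% Informational action over the finite temporal horizon [0, T], with the
% Laplace-encoded IFE F(\mu, \dot\mu) playing the role of a Lagrangian:
S[\mu] = \int_{0}^{T} F\big(\mu(t), \dot{\mu}(t)\big)\, dt
% Hamilton's principle \delta S = 0 yields the recognition dynamics,
\frac{d}{dt}\,\frac{\partial F}{\partial \dot{\mu}}
  - \frac{\partial F}{\partial \mu} = 0,
\qquad
p \equiv \frac{\partial F}{\partial \dot{\mu}}
% where \mu is the brain variable (first-order sufficient statistic) and the
% conjugate momentum p is the precision-weighted prediction error.
```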
P45 Simultaneous recording of micro-electrocorticography and local field potentials for decoding rat forelimb movement
Jinyoung Oh1, Soshi Samejima1, Abed Khorasani1, Adrien Boissenin1, Sam Kassegne2, Chet Moritz1
1University of Washington, Rehabilitation Medicine, Seattle, WA, United States; 2San Diego State University, Mechanical Engineering, San Diego, CA, United States
Correspondence: Jinyoung Oh (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P45
Introduction: In the development of a brain-computer interface, the choice of signal recording method is crucial. Closed-loop neural interfaces can already improve hand and arm function by using intracortical recordings to control muscle stimulation. Here we explore whether brain surface signals recorded via electrocorticography (ECoG) are sufficient for decoding forelimb movement, and compare the results to simultaneous intracortical recordings of local field potentials (LFP) in rats.
Results: Figure 1B illustrates the signal filtered at 200–400 Hz for both the µECoG electrode array and the intracortical array, and demonstrates a high correlation between the two signals (correlation coefficient r > 0.5). The decoding performance of µECoG was similar to that of LFP for this lever task in all three animals (LFP: r = 0.48 ± 0.05, µECoG: r = 0.45 ± 0.06, p > 0.05).
Discussion: Our results suggest that µECoG may functionally replace intracortical LFP when developing a closed-loop brain-computer interface that decodes forelimb movement. Less invasive recordings requiring less power, due to a narrower recording bandwidth, will likely speed the development of a clinically viable closed-loop brain-spinal interface. In addition, signal processing must be kept efficient in order to process all signals on the implanted device. Here we find that µECoG processed as described above allows decoding of forelimb movement with accuracy similar to LFPs. Computational efficiency may be a substantial advantage when designing clinical neural devices to treat brain and spinal cord injury.
Gihan Weerasinghe1, Benoit Duchet1, Rafal Bogacz1, Christian Bick2
1University of Oxford, Nuffield Department of Clinical Neurosciences, Oxford, United Kingdom; 2University of Oxford, Mathematical Institute, Oxford, United Kingdom
Correspondence: Gihan Weerasinghe (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P46
Deep brain stimulation (DBS), as it is currently available, involves administering a constant frequency pulse train via electrodes implanted into the brain and is known to be an effective treatment for a variety of neurological disorders, including Parkinson’s Disease and Essential Tremor (ET). There is significant evidence to suggest that the ‘closed loop’ approach of delivering stimulation according to the ongoing symptoms of the patient has the potential to improve both the effectiveness and efficiency of the treatment. The success of closed loop DBS depends on being able to devise a stimulation strategy according to the measurable and quantifiable symptoms of the patient. A useful stepping stone towards this is to construct a mathematical model which can describe the dynamics of the oscillations in addition to describing how such oscillations should change as a result of applying stimulation. Our work focuses on the use of the Kuramoto model to describe tremor oscillations found in patients with ET. We show how this model can capture the basic dynamics of tremor oscillations found in such patients and then, using a reduced form of the Kuramoto model, we derive expressions which describe how a patient should respond to stimulation at a given phase and amplitude. We predict that, provided certain conditions are satisfied, the best stimulation strategy should be phase specific but also that applying stimulation at lower amplitudes should have a greater effect. We support this surprising prediction with some preliminary results obtained from ET patients. In light of our predictions, we also propose a new hybrid strategy which effectively combines two of the strategies found in the literature, namely phasic and adaptive DBS.
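A minimal sketch of the kind of Kuramoto-model setup described above, with a phase-specific perturbation standing in for a DBS pulse (the parameters and the particular stimulation rule are illustrative, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_kuramoto(n=100, k=2.0, dt=0.01, steps=5000, stim=None):
    """Mean-field Kuramoto oscillators in the tremor band; the order-parameter
    amplitude |z| stands in for tremor severity. `stim(theta)` may inject a
    per-step phase perturbation modelling a stimulation pulse."""
    omega = rng.normal(2 * np.pi * 5.0, 0.5, n)   # natural frequencies ~5 Hz
    theta = rng.uniform(0, 2 * np.pi, n)
    amplitude = np.empty(steps)
    for i in range(steps):
        z = np.mean(np.exp(1j * theta))           # complex order parameter
        amplitude[i] = np.abs(z)
        theta += dt * (omega + k * np.abs(z) * np.sin(np.angle(z) - theta))
        if stim is not None:
            theta += stim(theta)
    return amplitude

def desynchronizing_stim(theta, gain=0.05):
    # Phase-specific perturbation: push each oscillator away from the
    # population mean phase (one of many possible closed-loop rules).
    psi = np.angle(np.mean(np.exp(1j * theta)))
    return gain * np.sin(theta - psi)

baseline = simulate_kuramoto()
stimulated = simulate_kuramoto(stim=desynchronizing_stim)
```

With coupling above the synchronization threshold, the unstimulated population locks and the order parameter stays high, while the phase-specific perturbation holds it near the incoherent, finite-size level.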
Frances Chance, Christina Warrender
Sandia National Laboratories, Department of Neural and Data-Driven Computing, Albuquerque, NM, United States
Correspondence: Frances Chance (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P47
While retinal circuitry may appear simple compared to higher-level areas of the brain, the retina contains a surprising diversity of retinal ganglion cell types (detecting motion, color, etc.) that perform an equally wide range of computations to “preprocess” visual input before transmission through the optic nerve. It is often assumed that specific retinal ganglion cell types are selective for visual features that are particularly useful for encoding visual stimuli (e.g. center-surround cells) or particularly relevant for an animal’s perceptual world (e.g. sensitivity to looming stimuli), comprising a behaviorally relevant information channel encoding specific information about the visual environment. This research focuses on motion-sensitive retinal ganglion cells. Specifically, we ask which types of motion-sensitive cells perform best under challenging conditions, for example when the moving target is dim relative to the background or under noisy conditions, and we are particularly interested in understanding which ganglion cell types are best suited for incorporation into a neuromorphic system for specific visual tasks. We construct a number of models of retinal ganglion cell types implicated in motion processing, including direction-selective models, such as the Hassenstein & Reichardt model [6] or the Barlow-Levick model [2], as well as motion-sensitive cell types, such as the OMS (object-motion sensitive) cell [1] and the W3 cell [3–5]. We then examine the performance of these models at detecting varying visual stimuli over a range of conditions, including noise and jitter, and discuss strategies by which outputs of different cell types can best be combined to track moving targets in a visual scene. We then compare the effectiveness of these strategies on “real-world” videos of visual scenes.
Baccus SA, Ölveczky BP, Manu M, Meister M. J. Neuroscience 2008, 28: 6807–6817.
Barlow HB, Levick WR. J. Physiology 1965, 178: 477–504.
Kim T, Kerschensteiner D. Cell Reports 2017, 19: 1343–1350.
Kim T, Soto F, Kerschensteiner D. eLife 2015, 4: e08025.
Zhang Y, Kim I-J, Sanes JR, Meister M. PNAS 2012, 109: E2391–E2398.
Hassenstein B, Reichardt W. Z. Naturforsch 1956, 11b: 513–524.
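For reference, the Hassenstein & Reichardt correlator cited above can be sketched in a few lines; a pure delay stands in for the low-pass stage, and the drifting-bar stimulus is invented for illustration:

```python
import numpy as np

def reichardt_response(stimulus, tau=1):
    """Hassenstein-Reichardt correlator: each arm multiplies one input with a
    delayed copy of its spatial neighbour; the difference of the two
    mirror-symmetric arms is direction selective."""
    s = np.asarray(stimulus, dtype=float)   # shape (time, space)
    delayed = np.zeros_like(s)
    delayed[tau:] = s[:-tau]                # pure delay as the low-pass stage
    left = s[:, 1:] * delayed[:, :-1]       # preferred arm: rightward motion
    right = s[:, :-1] * delayed[:, 1:]      # mirror-symmetric arm
    return float(np.sum(left - right))

# A bright bar drifting rightward at 1 px/step, and its mirror image.
t_steps, n_px = 40, 20
rightward = np.zeros((t_steps, n_px))
for t in range(t_steps):
    rightward[t, t % n_px] = 1.0
leftward = rightward[:, ::-1]
```

By construction the detector's output is antisymmetric under spatial mirroring, so opposite motion directions give responses of opposite sign.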
P48 Using information theory and a Bayesian model to examine the factors that influence the decision to consume alcohol in a rodent model of alcoholism
Nicholas Timme, David Linsenbardt, Christopher Lapish
Indiana University-Purdue University, Department of Psychology, Indianapolis, IN, United States
Correspondence: Nicholas Timme (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P48
About 16 million Americans have been diagnosed with an alcohol use disorder, and alcohol use costs the United States approximately 250 billion dollars a year. Therefore, identifying the factors that lead to excessive drinking, and understanding the neural mechanisms by which they do so, is a vital goal in neuroscience. In this study, we used information theory and Bayesian modelling techniques to examine both neural and behavioral signals that predict alcohol consumption in rodents. We performed in vivo electrophysiological recordings in the dorsal medial prefrontal cortex (mPFC, a brain region heavily involved in decision-making) of a validated rodent model of excessive drinking (the alcohol-preferring (P) rat) and a control rat line (Wistar) during a simple cued alcohol drinking task. We used dynamic information theory (mutual information) to examine changes in encoding of future drinking (intent) at multiple time points throughout this task by individual neurons. We found that P rats showed decreased intent encoding compared to Wistars when consuming alcohol, but similar intent encoding when consuming water. These results indicate that encoding of alcohol drinking intent is diminished in the mPFC of animals with a genetic risk for excessive drinking (P rats). Next, we used behavioral data and Bayesian modelling techniques to construct a logistic regression model incorporating behavioral variables to predict when these rodents would drink. Model coefficients for the number of previous drinking bouts and the distance to the sipper were significant for many recordings, indicating predictive power in determining whether the animal would drink on a given trial. The model predicted future drinking well: for instance, the receiver operating characteristic area under the curve was above 0.8 for 22 of 26 individual animal recordings and for all animals combined.
These results indicate that the logistic regression model developed herein is capable of predicting future drinking in this experimental paradigm. Overall, these results identify key behavioral variables that influence the decision to consume alcohol and provide evidence that the neural processes underlying this decision-making process are fundamentally altered in excessive drinking animals. In future studies, we will continue to combine these techniques to examine encoding of behavioral signals and latent variables relevant to the prediction of drinking in other brain regions to more fully understand the key changes in information processing underlying maladaptive decision-making in alcohol use disorder.
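A minimal sketch of such a logistic regression on behavioral predictors (plain gradient ascent rather than the Bayesian fit, and with simulated trial data; the variable names and coefficients are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_logistic(X, y, lr=0.1, steps=5000):
    """Plain gradient-ascent logistic regression (a stand-in for the
    Bayesian fit described in the abstract)."""
    Xb = np.column_stack([np.ones(len(X)), X])   # prepend intercept
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)        # ascend the log-likelihood
    return w

def predict(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1 / (1 + np.exp(-Xb @ w))

def auc(y, p):
    """ROC area under the curve via the rank (Mann-Whitney) statistic."""
    pos, neg = p[y == 1], p[y == 0]
    return np.mean(pos[:, None] > neg[None, :])

# Simulated trials: previous drinking bouts and distance to the sipper.
n = 500
prev_bouts = rng.poisson(3, n)
distance = rng.uniform(0, 1, n)
logit = 1.0 * prev_bouts - 4.0 * distance - 1.5
y = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([prev_bouts, distance])
w = fit_logistic(X, y)
p = predict(w, X)
```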
Arthur Hung1, Chi Keung Chan2, Chuan-Chin Chiao3
1National Tsing Hua University, Department of Physics, Hsinchu, Taiwan, Province of China; 2Academia Sinica, Department of Physics, Taipei, Taiwan, Province of China; 3National Tsing Hua University, Department of Life Sciences, Hsinchu, Taiwan, Province of China
Correspondence: Arthur Hung (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P49
The task of the nervous system is to detect, compute, and make decisions in order to produce responses that best suit the survival of the organism. Under evolutionary pressure, organisms evolve their nervous systems to achieve high reliability and precision. Take the visual system for example: in vitro experiments demonstrate that the retina can count single photons. However, the nervous system is also accompanied by various types of noise, such as synaptic conduction, thermal fluctuations of photosensitive molecules, and stochastic openings and closings of ion channels. Noise is usually regarded as a disturbance that lowers reliability and precision. However, there is a phenomenon in nonlinear physics known as “stochastic resonance” (SR) which states something entirely different: the presence of noise can enhance the detection of a weak sub-threshold signal. In this study, we studied the encoding of a dynamic pattern under different contrast levels, and then examined whether adding noise can enhance information transfer when the contrast is low. We used in vitro electrophysiological recording and computer simulation to investigate how noise influences the encoding of different light-intensity patterns in the retina, and tried to identify the relevant circuitry components that would produce the enhancement. We generated our stimulus sequence with a hidden Markov model, and used a gamma-corrected LCD panel for light stimulation, focused onto the photoreceptor layer of the retina by a microscope lens and calibrated by a separate digital microscope on top with an amplified photodiode. Extracellular recording was conducted using a 64-channel multielectrode array (MEA), with electrode diameters of 10 μm or 30 μm and 200 μm spacing in a square grid, to measure the spiking patterns (action potentials) of the retinal ganglion cells.
Time-shifted mutual information analysis (Shannon information) was performed for different contrast conditions to quantify information transfer. We found that the lower the contrast, the lower the peak height of the time-shifted mutual information, and that this scaling was nonlinear. There were different patterns and shapes of this time-shifted mutual information: roughly, there were two kinds, (1) single peak and (2) double peak, and lowering the contrast changed neither the peak location in time nor the shape, only the peak height. This result gives us insight into the limitations of the encoding/detection process. We are in the process of adding spatially uniform or non-uniform noise at sub-threshold contrast levels to test SR directly. As for the effect of noise, we performed a simulation using the FitzHugh–Nagumo model with a periodic sine-wave input; the results showed that adding noise can indeed enhance the phase-locking ability of the cells.
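A minimal sketch of the FitzHugh–Nagumo demonstration described above, with a sub-threshold sine input that elicits spikes only once noise is added (all parameter values are illustrative):

```python
import numpy as np

def fhn_spike_count(noise_sd, seed=4, a=0.7, b=0.8, eps=0.08,
                    amp=0.1, freq=0.2, dt=0.05, steps=40000):
    """FitzHugh-Nagumo neuron driven by a sub-threshold sine plus white
    noise; returns the number of spikes (upward crossings of v = 1)."""
    rng = np.random.default_rng(seed)
    noise = noise_sd * np.sqrt(dt) * rng.normal(size=steps)
    v, w = -1.2, -0.6            # start near the resting fixed point
    spikes, above = 0, False
    for i in range(steps):
        I = amp * np.sin(2 * np.pi * freq * i * dt)   # weak periodic drive
        v += dt * (v - v**3 / 3 - w + I) + noise[i]   # Euler-Maruyama step
        w += dt * eps * (v + a - b * w)
        if v > 1.0 and not above:                     # count threshold crossings
            spikes, above = spikes + 1, True
        elif v < 0.0:
            above = False
    return spikes

quiet = fhn_spike_count(0.0)    # sub-threshold drive alone
noisy = fhn_spike_count(0.3)    # the same drive plus noise
```

Without noise the weak drive never takes the excitable neuron over threshold; with moderate noise, spikes occur and can carry information about the drive, which is the essence of SR.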
P50 Diverse dynamics in small recurrent networks: A case study of coupled recurrent and coupled inhibitory neurons
Pei Hsien Liu1, Cheng-Te Wang2, Alexander White3, Tung-Chun Chang4, Chung-Chuan Lo3
1National Tsing Hua University, Interdisciplinary Program of Engineering, Hsinchu City, Taiwan, Province of China; 2National Tsing Hua University, Institute of Bioinformatics and Structural Biology, Hsinchu, Taiwan, Province of China; 3National Tsing Hua University, Institute of Systems Neuroscience, Hsinchu, Taiwan, Province of China; 4Academia Sinica, Institute of Information Science, Taipei, Taiwan, Province of China
Correspondence: Pei Hsien Liu (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P50
Recurrently connected neurons are universally found in the brains of most species, including primates, rodents and insects, and these recurrent circuits have been suggested to play multiple roles in brain function. Indeed, from an evolutionary point of view, it is cost-effective for nervous systems to develop a “Swiss army knife” solution, in which a small set of neural circuit motifs is able to perform a variety of functions. However, the exact structure of these circuits, as well as how they give rise to diverse functions, is still unclear. Some of the known functions include robustness, balancing of excitation and inhibition, decision making, oscillations and memory. In this project, we systematically studied the functions of a class of recurrently connected microcircuits using a computational modeling approach. We first identified four-node motifs that are abundant in the current Drosophila connectome (around 22,835 neurons) in comparison to random networks. Two approaches were then employed to study the functionality of the over-represented circuits: a dynamical one and an information-theoretic one. For the dynamical approach, our analysis demonstrated that one of the most abundant motifs exhibits diverse functionality, including working memory, decision making, flip-flop switching and oscillation. For the information-theoretic approach, we obtained a rudimentary set of metrics that partially reflects the system’s dynamics without requiring detailed knowledge of its parameters, including the distributions of the firing-rate Fano factor and the ISI Fano factor. This can serve as a reference for experimentalists who wish to understand emergent properties that arise from the interconnection of neurons but do not know the precise parameters or inputs.
In summary, our research reveals the potential functions of a class of small recurrent circuits and provides insight into the canonical architecture of nervous systems.
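The two Fano-factor metrics mentioned above can be computed as follows (the window length and the synthetic spike trains are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def count_fano(spike_times, t_max, window=1.0):
    """Fano factor of spike counts in fixed windows: variance / mean."""
    edges = np.arange(0.0, t_max + window, window)
    counts, _ = np.histogram(spike_times, edges)
    return counts.var() / counts.mean()

def isi_fano(spike_times):
    """Fano factor of the inter-spike intervals: variance / mean of ISIs."""
    isi = np.diff(np.sort(spike_times))
    return isi.var() / isi.mean()

# A Poisson spike train has a count Fano factor near 1; a regular,
# clock-like train has one near 0.
t_max = 2000.0
poisson_train = np.sort(rng.uniform(0, t_max, int(5 * t_max)))   # ~5 Hz
regular_train = np.arange(0, t_max, 0.2)                         # 5 Hz clock

fano_poisson = count_fano(poisson_train, t_max)   # close to 1
fano_regular = count_fano(regular_train, t_max)   # close to 0
```

Collecting these statistics across a population gives the distributions used above as model-free signatures of the circuit's dynamical regime.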
P51 Morpho-electric properties and computational simulation of human dentate gyrus granule cells from the epileptogenic hippocampus
Anatoly Buchin1, Rebecca de Frates1, Peter Chong1, Rusty Mann1, Jim Berg1, Ueli Rutishauser2, Ryder Gwinn3, Staci Sorensen1, Jonathan Ting1, Costas A. Anastassiou1
1Allen Institute for Brain Science, Modelling, Analysis and Theory, Seattle, WA, United States; 2Cedars-Sinai Medical Center, California Institute of Technology, Los Angeles, CA, United States; 3Swedish Medical Center, Seattle, WA, United States
Correspondence: Anatoly Buchin (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P51
Epilepsy is the fourth most common neurological disease, characterized by unpredictable seizures interrupting normal brain function. Despite considerable advances in the treatment and diagnosis of seizure disorders, about 40% of patients remain pharmacoresistant. Seizures are often correlated with hippocampal sclerosis, which is classified by Watson Grade (WG) ranging from 0 to 5, from less to more severe cases. To elucidate mechanisms underlying epileptogenesis in the human hippocampus, we use an in vitro workflow to study the excitability of hippocampal neurons in tissue slices from specimens excised during brain surgery for the treatment of focal, pharmacoresistant epilepsy. We systematically analyzed the morphological and electrophysiological properties of human hippocampal dentate gyrus granule cells with different degrees of hippocampal sclerosis (WG1 vs. WG4). We find that spiking properties such as the f-I curve and spike-frequency adaptation are correlated with WG, while passive properties such as input resistance and resting potential are not. The majority of morphological properties of single neurons do not correlate with the degree of hippocampal sclerosis, further pointing to an excitability difference as the most prominent single-neuron biomarker. To test the implications of the observed differences under realistic scenarios, we develop biophysically detailed computational models of granule cells with active dendrites that reproduce key electrophysiological features of human hippocampal granule cells as a function of WG. Using these models, we explore relevant scenarios associated with hippocampal sclerosis and the propensity toward seizure initiation.
Fisher R, Boas W, Blume W, et al. Epileptic Seizures and Epilepsy: Definitions Proposed by the International League Against Epilepsy (ILAE) and the International Bureau for Epilepsy (IBE).
Cendes F, Cook M, Watson C, et al. Frequency and characteristics of dual pathology in patients with lesional epilepsy.
Watson C, Nielsen SL, Cobb C, et al. Pathological grading system for hippocampal sclerosis: correlation with magnetic resonance imaging-based volume measurements of the hippocampus.
P52 Development of realistic single-neuron models of mouse V1 capturing in vitro and in vivo properties
Yina Wei, Anirban Nandi, Costas A. Anastassiou
Allen Institute for Brain Science, Modelling, Analysis and Theory, Seattle, WA, United States
Correspondence: Yina Wei (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P52
The Allen Institute for Brain Science has a mission to understand cortical computations related to the rodent visual pathway. Part of this approach is to develop accurate single-neuron models that capture basic observations across multiple spatiotemporal scales. The Allen Cell Types Database reports an approach for generating single-neuron models from 3D morphologies and somatic electrophysiological recordings. While informative, these models are limited to observations at the soma, whereas approx. 95% of neural surface area is along the dendrites. Improper characterization of these dendrites can lead to gross distortion of their synaptic integration capabilities, given that the main postsynaptic target is along dendritic cables (especially of excitatory synapses). To overcome these challenges, we develop a model generation workflow based on experimental data from two modalities: in vitro somatic intracellular recordings from slice experiments and in vivo extracellular recordings of single units from behaving rodent experiments. Especially regarding the latter data modality, a novel extracellular probe called Neuropixels offers the ability to measure extracellular action potential (EAP) signatures from multiple (up to 10) contacts in vivo. We use these EAP signatures extending over hundreds of μm as a constraint for modeling various passive and active dendritic properties. Specifically, we modify the fitness function of the genetic algorithm within our optimization framework to include extracellular features, such as the amplitude and the width of the backpropagating EAP, extracted from multi-channel recordings in freely moving animals alongside intracellular features at the soma. We evaluate the two models with regards to their goodness-of-fit against in vitro and in vivo data for excitatory and inhibitory cell classes and show how adding in vivo dendritic features to the optimization contributes to capturing key intracellular and extracellular observables.
These results lay the groundwork towards a powerful modeling approach leveraging the rich data set at our disposal.
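The multi-objective fitness idea described above can be sketched as follows; the feature names, target values, and z-scored error form are illustrative assumptions for this sketch, not the authors' actual feature set:

```python
# Hypothetical sketch of a fitness term combining intracellular (somatic)
# and extracellular (EAP) features in a genetic-algorithm optimization.
# All feature names and numbers below are illustrative placeholders.

def feature_error(model, target, std):
    """Z-scored deviation of a model feature from its experimental target."""
    return abs(model - target) / std

def fitness(model_feats, target_feats, feat_stds):
    """Summed z-scored error over intracellular and extracellular features."""
    return sum(
        feature_error(model_feats[name], target_feats[name], feat_stds[name])
        for name in target_feats
    )

# Illustrative targets: a somatic spike width plus the amplitude and width
# of the backpropagating EAP measured on a distal probe contact.
targets = {"soma_spike_width_ms": 0.8, "eap_amp_uV": 40.0, "eap_width_ms": 1.1}
stds = {"soma_spike_width_ms": 0.1, "eap_amp_uV": 8.0, "eap_width_ms": 0.2}
model = {"soma_spike_width_ms": 0.9, "eap_amp_uV": 32.0, "eap_width_ms": 1.3}

score = fitness(model, targets, stds)  # lower is better
```

Adding the extracellular features to the sum is what lets dendritic signals, not just somatic ones, shape the optimization.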
Hawrylycz M, Anastassiou C, Arkhipov A, Berg J, Buice M, Cain N, Gouwens NW, Gratiy S, Iyer R, Lee JH et al.: Inferring cortical function in the mouse visual system through large-scale systems neuroscience. Proc Natl Acad Sci U S A 2016, 113(27), 7337–7344.
Jun JJ, Steinmetz NA, Siegle JH, Denman DJ, Bauza M, Barbarits B, Lee AK, Anastassiou CA, Andrei A, Aydin C et al.: Fully integrated silicon probes for high-density recording of neural activity. Nature 2017, 551(7679), 232–236.
Gold C, Henze DA, Koch C: Using extracellular action potential recordings to constrain compartmental models. J Comput Neurosci 2007, 23(1), 39–58.
P53 A multi-modal discovery platform toward studying mechanisms-of-action of electric brain stimulation
Fahimeh Baftizadeh1, Soo Yeun Lee1, Sergey Gratiy1, Taylor Cunnington2, Shawn Olsen1, Costas A. Anastassiou1
1Allen Institute for Brain Science, Modelling, Analysis and Theory, Seattle, WA, United States; 2University of Washington, Seattle, WA, United States
Correspondence: Fahimeh Baftizadeh (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P53
There has been increasing interest and progress in the field of neural prosthetics, with many exciting efforts focusing on the development of brain-machine interfaces to drive sensory devices (e.g. retina, cochlea) or motor devices (e.g. prosthetic limbs). Yet, with a few notable exceptions, the employment of neuro-prosthetics for monitoring and intervention in cognitive physiology and pathologies has remained limited. Concurrently, cognitive impairment has proven to be among the least tractable and most disabling aspects of a wide variety of brain disorders including autism, epilepsy, depression and schizophrenia. Despite the intense interest in and potential of electrical stimulation for cognitive disorders, there is still today a debilitating lack of understanding about where, when and how to inject current into cortical circuits to modulate higher-level brain processing. At the same time, the Allen Institute for Brain Science has developed a large-scale approach for the robust and reproducible deconstruction of cortical circuitry toward understanding how the interplay of components gives rise to high-level processing. We use a similar approach to address the challenges of understanding and predicting brain stimulation effects, with the primary goal of tackling the fundamental question of how to inject current so as to transform the specificity and capability of electrical stimulation devices, in either open- or closed-loop mode, to ameliorate cognitive disorders. Specifically, using a biophysically detailed simulation workflow (allowing the exploration and permutation of key parameters of electrical stimulation entrainment) in parallel with novel multi-patch brain-slice experiments, we seek to understand electric field effects at the single-neuron level and how parameters such as distance from the electrode or stimulation characteristics impact sub-threshold and spiking responses of single neurons.
We use this platform to generate novel insights toward significantly refining brain stimulation techniques in therapeutic neuroscience research.
Koch C, Reid RC. Neuroscience: Observatories of the mind. Nature 2012, 483, 397–398.
Chaitanya Chintaluri1, Marta Kowalska2, Michał Czerwiński2, Władysław Średniawa2, Joanna Jędrzejewska-Szmek2, Daniel Wójcik2
1University of Oxford, Centre for Neural Circuits and Behaviour, Oxford, United Kingdom; 2Nencki Institute of Experimental Biology of PAS, Laboratory of Neuroinformatics, Warsaw, Poland
Correspondence: Daniel Wójcik (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P54
Extracellular potential in the brain reflects the activity of transmembrane currents of neural and glial cells. The long range of the electric field leads to significant correlations between recordings at distant sites, complicating the analysis. Reconstructing the Current Source Density (CSD), which is the local origin of the potential, facilitates data interpretation. In 2012 we introduced the Kernel Current Source Density method (kCSD), a model-based reconstruction method which allows source estimation from arbitrary distributions of electrodes. The method is also guarded against over-fitting by constraining the complexity of the inferred CSD model. Here we revisit the method on the occasion of a new open Python implementation, which includes new functionality and several additional diagnostic tools compared to the original. The goal of this presentation is to advertise the method, the new implementation, and the new diagnostics available. Specifically, we (1) analyze spectral properties of the method; (2) introduce error maps to investigate the accuracy of the reconstruction; (3) introduce the L-curve for estimation of optimal reconstruction parameters. The new implementation supports reconstruction for 1D, 2D, and 3D setups, assuming sources distributed throughout the tissue, inside a slice, or on single cells when the cell morphology is available and the potential comes from that cell. The toolbox, accompanied by a tutorial Jupyter notebook, is available at https://github.com/Neuroinflab/kCSD-python
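The core kernel-CSD idea, regressing measured potentials onto a basis of sources with a regularizer that guards against over-fitting (cf. the L-curve diagnostic), can be caricatured in a few lines of numpy. This is a hedged 1D toy with Gaussian stand-ins for the true forward model, not the kCSD-python API:

```python
# Toy 1D kernel-CSD sketch: kernel ridge regression from potentials to CSD.
# The Gaussian "forward model" below is a placeholder for the physical one.
import numpy as np

ele_pos = np.linspace(0.0, 1.0, 8)      # electrode positions (arbitrary units)
est_pos = np.linspace(0.0, 1.0, 50)     # positions at which to estimate CSD
src_pos = np.linspace(0.0, 1.0, 30)     # basis-source centers

def basis(x, c, w=0.1):                 # Gaussian CSD basis source
    return np.exp(-((x - c) ** 2) / (2 * w * w))

def pot(x, c, w=0.25):                  # toy potential of a basis source
    return np.exp(-((x - c) ** 2) / (2 * w * w))

B_pot = pot(ele_pos[:, None], src_pos[None, :])    # electrodes x sources
B_csd = basis(est_pos[:, None], src_pos[None, :])  # CSD points x sources

K = B_pot @ B_pot.T            # kernel between electrodes
K_cross = B_csd @ B_pot.T      # cross-kernel: CSD points vs electrodes

V = pot(ele_pos, 0.5)          # "measured" potentials from one central source
lam = 1e-3                     # regularization parameter (chosen by L-curve)
beta = np.linalg.solve(K + lam * np.eye(len(ele_pos)), V)
csd_est = K_cross @ beta       # reconstructed CSD profile
```

The regularization constant `lam` plays the role of the complexity constraint; the L-curve mentioned in the abstract is one way to pick it.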
Anthony Burkitt1, David Grayden1, Hamish Meffin2, Omid Monfared1, Bahman Tahayori1, Dean Freestone3, Dragan Nesic4
1University of Melbourne, Department of Biomedical Engineering, Parkville, Australia; 2National Vision Research Institute, Carlton, Australia; 3University of Melbourne, Department of Medicine, Parkville, Australia; 4University of Melbourne, Department of Electrical & Electronic Engineering, Parkville, Australia
Correspondence: Hamish Meffin (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P55
Knowledge of electrical properties of neural tissue, such as conductivity, is important in various applications such as therapeutic electrical stimulation of the nervous system and electrical impedance tomography. It is also essential for the interpretation of intrinsic electrical signals in neuroscience such as single and multi-unit activity, the local field potential and electroencephalogram. It is usually assumed that neural tissue can be described by a locally homogeneous conductivity that captures the bulk properties of heterogeneous cellular microstructure. However, the cellular structure of tissue creates a complex partition of intra- and extra-cellular spaces that are separated by a high impedance membrane. These microstructural inhomogeneities lead to complicated current paths through the tissue, invalidating assumptions that allow a description based on a simple conductivity.
Here, we review our recent work that begins with the underlying heterogeneous microstructure of neural tissue and derives its bulk electrical properties in the form of the tissue admittivity, which generalizes the usual conductivity [1–4]. A novel aspect of the admittivity is that it has both spatial and temporal spectral frequency dependence. New expressions are given for the admittivity of several tissue types, including isotropic tissues with fibers oriented randomly in all (three-dimensional) directions and laminar tissue types with fibers oriented randomly within planes that are stacked upon each other. The spatio-temporal spectral frequency dependence of the tissue admittivity leads to non-trivial spatiotemporal electrical filtering properties of neural tissue, which we illustrate here. First, we show how variation in a temporal parameter, namely the applied pulse-width, can affect a spatial property like the profile of the extracellular potential. Second, we show that, for tissue with a homogeneous structural anisotropy, variation in a spatial variable, namely distance from the electrode, can nonetheless affect the degree of electrical anisotropy.
OM acknowledges support from a NICTA/Data61 postgraduate research scholarship. HM acknowledges support from the Australian Research Council Centre of Excellence for Integrative Brain Function.
Meffin H, Tahayori B, Grayden DB, Burkitt AN. Modeling extracellular electrical stimulation: I. Derivation and interpretation of neurite equations, J. Neural. Eng. 2012, 9(6), Art# 060505.
Tahayori B, Meffin H, Dokos D, et al. Modeling extracellular electrical stimulation: II. Computational validation and numerical results, J. Neural. Eng. 2012, 9(6), Art# 060506.
Meffin H, Tahayori B, Sergeev EN, et al. Modelling extracellular electrical stimulation III: Derivation and interpretation of neural tissue equations, J. Neural Eng, 2014, 11(6), Art# 065004.
Tahayori B, Meffin H, Sergeev EN, et al. Modelling extracellular electrical stimulation IV: Effect of the cellular composition of neural tissue on its spatio-temporal filtering properties, J. Neural Eng. 2014, 11(6), Art# 065005.
Brittany Baker, Duane Nykamp
University of Minnesota, School of Mathematics, Minneapolis, MN, United States
Correspondence: Brittany Baker (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P56
We have developed a network model in which one can independently modulate both local and global features of the network connectivity. Our implementation of local microstructure is based on the SONET model, where one can specify the frequencies of different two-edge motifs in the network. We have extended this approach to allow for the inclusion of global structure in the patterns of connections, such as connections based on an underlying geometry. Using this model, we investigated how the influence of microstructure (motifs) on the emergence of synchronous events is modulated by spatial features of the network.
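For orientation, the second-order (two-edge) motif statistics that the SONET model constrains can be counted directly from an adjacency matrix. This brute-force sketch is illustrative only and is not the authors' code:

```python
# Count the four two-edge motif classes (reciprocal, convergent, divergent,
# chain) in a directed graph given as an adjacency matrix with no self-loops.
import numpy as np

def motif_counts(A):
    """A[i, j] = 1 means an edge i -> j; returns counts of two-edge motifs."""
    A = np.asarray(A)
    d_in, d_out = A.sum(axis=0), A.sum(axis=1)
    recip = int((A * A.T).sum() // 2)               # i <-> j pairs
    conv = int((d_in * (d_in - 1) // 2).sum())      # i -> k <- j
    div = int((d_out * (d_out - 1) // 2).sum())     # i <- k -> j
    A2 = A @ A
    chain = int(A2.sum() - np.trace(A2))            # i -> j -> k with i != k
    return {"recip": recip, "conv": conv, "div": div, "chain": chain}

# Toy graph on 3 nodes with edges 0->1, 1->0, 1->2, 0->2
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [0, 0, 0]])
counts = motif_counts(A)
```

In a SONET-style model these four frequencies are the tunable local parameters, while the extension described above layers global (e.g. geometric) structure on top.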
Zhao L, Beverlin II B, Netoff T, Nykamp DQ. Synchronization from second order network connectivity statistics. Frontiers in Computational Neuroscience, 5(28), 2011. https://doi.org/10.3389/fncom.2011.00028.
Max Nolte, Michael Reimann, James King, Henry Markram, Eilif Muller
École Polytechnique Fédérale de Lausanne, Blue Brain Project, Lausanne, Switzerland
Correspondence: Max Nolte (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P57
The combined impact of cellular noise sources and network dynamics on the intrinsic variability of cortical activity is not known. We quantified this variability by analyzing how somatic membrane potentials in simulations of neocortical microcircuitry with biological noise sources diverged from identical initial conditions. By selectively disabling noise sources, we found that any combination of noise or subthreshold perturbations causes chaotic divergence of membrane potentials with similarly high steady-state variability. However, the rate at which membrane potentials diverged depended on which noise sources were active, with synaptic noise dominating the rate. We found that, in spite of this high intrinsic variability, thalamocortical inputs can overcome chaotic network dynamics to produce reliable spike timing. However, synaptic noise causes substantial residual spike-timing variability, and the rate at which this evoked activity diverges is similar to that of spontaneous activity. Thus, any mechanism of reliable cortical coding must be robust to the limits set by the interplay of synaptic noise and chaos.
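A minimal caricature of the divergence measurement, assuming a single leaky "membrane" driven by independent noise realizations (noise-driven divergence only, with none of the microcircuit's network chaos), might look like:

```python
# Two trajectories start from identical initial conditions and receive
# independent noise; their absolute difference grows to a steady-state
# level. All parameters are arbitrary illustration values.
import numpy as np

rng1, rng2 = np.random.default_rng(1), np.random.default_rng(2)
dt, tau, sigma, n_steps = 0.1, 10.0, 1.0, 5000

def step(v, rng):
    # leaky integration plus an independent "synaptic noise" increment
    return v + dt * (-v / tau) + sigma * np.sqrt(dt) * rng.standard_normal()

v1 = v2 = 0.0                       # identical initial conditions
divergence = np.empty(n_steps)
for t in range(n_steps):
    v1, v2 = step(v1, rng1), step(v2, rng2)
    divergence[t] = abs(v1 - v2)

early = divergence[:10].mean()      # just after the identical start
late = divergence[-1000:].mean()    # steady-state variability
```

In the abstract's analysis, the quantity of interest is the rate of this growth under different combinations of enabled noise sources, and the steady-state level it saturates at.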
Taylor Newton, Juan Hernando, Jafet Villafranca Díaz, Stefan Eilemann, Grigori Chevtchenko, Henry Markram, Eilif Muller
École Polytechnique Fédérale de Lausanne, Blue Brain Project, Lausanne, Switzerland
Correspondence: Taylor Newton (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P58
Michele Migliore1, Lida Kanari2, James King2, Szabolcs Kali3, Henry Markram2, Armando Romani2, Nicolas Antille2, Luca Leonardo Bologna5, Julian Martin Leslie Budd5, Jean-Denis Courcol2, Adrien Devresse2, Andras Ecker2, Joanne Falck6, Cyrille PH Favreau2, Michael Gevaert2, Attila Gulyas5, Olivier Hagens2, Juan Hernando2, Silvia Jimenez2, Sigrun Lange7, Carmen Alina Lupascu1, Rosanna Migliore1, Maurizio Pezzoli2, Srikanth Ramaswamy2, Christian A Rössert2, Sara Sáray5, Ying Shi2, Werner Alfons Hilda Van Geit2, Liesbeth Vanherpe2, Tamas Freund5, Audrey Mercer7, Alex M Thomson7, Eilif Muller2
1Institute of Biophysics, National Research Council, Palermo, Italy; 2École Polytechnique Fédérale de Lausanne, Blue Brain Project, Lausanne, Switzerland; 3Institute of Experimental Medicine, Hungarian Academy of Sciences, Budapest, Hungary; 4University College London & Deutsches Zentrum für Neurodegenerative Erkrankungen, Germany; 5University College London, United Kingdom
Correspondence: Michele Migliore (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P59
We present a full-scale cellular-level model of the CA1 area of the rat hippocampus. The model is built using a bottom-up, data-driven workflow, along the same lines followed to implement a cortical column. Starting from a set of reconstructed morphologies for the primary morphologically defined cell types, associated electrophysiological traces, and data-driven channel kinetics, we implemented biophysically accurate neuron models consistent with the statistics of features extracted from the experimental traces. A virtual volume was populated according to experimentally determined densities and proportions. The neurons were connected according to the approach previously developed for the neocortex, and the resulting connectivity and synaptic properties were validated against a number of experimental findings. The current release comprises 42 types of neuron (24 excitatory and 18 inhibitory) divided into 13 morphological types and 17 morpho-electrical types, with 156 potential pathways and 7 intrinsic synapse types. Simulations of the network show interesting emergent properties, such as theta oscillations in an LFP-like signal. The oscillations emerge from the intrinsic connectivity of the CA1 circuit driven by spontaneous miniature events without any external input, as observed experimentally. Furthermore, the network activity propagates along the septo-temporal axis, consistent with experimental observations. Phenomena like oscillations and traveling waves in the theta rhythm range can play important roles in shaping hippocampal function, but their mechanisms are not completely understood. The full-scale CA1 model represents an important tool to shed light on the cellular mechanisms behind such phenomena, elucidate the physiological conditions in which they can occur, and eventually reveal their role in the brain.
Markram H, et al., 2015, Cell 163:456–92.
Ropireddy D, et al., 2012, Neurosci. 205:91–111.
Bezaire MJ, et al., 2016, Elife Dec 23;5.
Reimann MW, et al., 2015, Front Comput Neurosci.9:120
Goutagny R, et al., 2009, Nat Neurosci. 12:1491–3.
Lubenov EV, Siapas AG. 2009, Nature 459:534–9.
P60 The SONATA data format: A new file format for efficient description of large-scale neural network models
Kael Dai1, Yazan Billeh1, Jean-Denis Courcol2, Sergey Gratiy1, Juan Hernando2, Adrien Devresse2, Michael Gevaert2, James King2, Werner Alfons Hilda Van Geit2, Daniel Nachbauer2, Arseny Povolotskiy2, Anton Arkhipov1, Eilif Muller2
1Allen Institute for Brain Science, Modelling, Analysis and Theory, Seattle, WA, United States; 2École Polytechnique Fédérale de Lausanne, Blue Brain Project, Lausanne, Switzerland
Correspondence: Kael Dai (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P60
Increasing computing power and availability of high-performance computing (HPC) resources have made it easier for neuroscientists to simulate and visualize large-scale brain network models.
However, one bottleneck for scientists developing, researching and sharing large-scale networks is the lack of efficient data formats to describe such models. A widespread practice is to represent models with simulator-specific code such as hoc, SLI or Python. XML-based formats have been proposed as a solution, but the use of XML quickly becomes problematic when scaling up to large realistic networks. Thus, an open specification is needed that is compact and computationally fast, yet also easy to read and edit. To meet these demands, the Allen Institute (AI) and the Blue Brain Project (BBP) have jointly developed the SONATA (Scalable Open Network Architecture TemplAte) Data Format, an open-source framework for representing neuronal circuits. The framework draws on both organizations' expertise with large-scale HPC network simulation, visualization and analysis. It was designed for memory and computational efficiency, as well as to work across multiple platforms. Even though AI and BBP use different approaches to modeling and different tools, the format allows networks built by one institute to be simulated by the other and vice versa. We provide the specification documentation, open-source reference APIs, and model and simulation output examples with the intention of catalyzing support and adoption of the format in the modeling community. The specification describes a format for representing the nodes (cells) and edges (synapses/junctions) of a network. It uses table-based data structures, HDF5 and CSV, to represent nodes, edges and their respective properties, and further defines indexing procedures for fast, parallelizable lookup of individual nodes and edges. The use of HDF5 provides efficiency in both storage space and read time. The format includes specific properties and naming conventions, but also allows modelers to extend node and edge model properties as they desire, to ensure models can be used with a variety of simulation frameworks and use cases.
Besides network representation, saving the output of large-scale network simulations presents formidable challenges. The output format must not only be standardized for reproducibility and analysis across teams, but also optimized for memory and read/write performance. The data format architecture we present here offers solutions to both problems. A systematic schema for describing simulation reports makes it easy for users to exchange their data, and the underlying HDF5-based format permits efficient storage of variables like spike times, membrane potential, and Ca2+ concentration. Lastly, to bring together network models, simulation output, and various run-time conditions (duration, time step, temperature, etc.), the specification includes a JSON-based file format for configuring simulations, including specifying the variables to record and the stimuli to apply. This will help reduce the guesswork normally needed to reproduce and adjust other organizations' simulations. The rapid advancement in neuroscientific data generation, large-scale data-driven modeling, and simulation capabilities makes the development of standards for network simulations necessary. The SONATA Data Format and framework are open to the community to use and build upon, with the goal of establishing such a standard.
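As a hedged sketch of the node-description side of such a format: per-type properties go in a small CSV table, while per-node properties are parallel arrays (here a plain dict stands in for the HDF5 datasets used by the real format). Column and field names below are loose paraphrases for illustration and are not guaranteed to match the published specification:

```python
# Illustrative SONATA-style split between a per-type CSV table and
# per-node array data. A dict stands in for an HDF5 file here.
import csv, io

# Per-type metadata: one row per node type (CSV, human-editable).
node_types_csv = io.StringIO()
writer = csv.writer(node_types_csv, delimiter=" ")
writer.writerow(["node_type_id", "model_type", "morphology"])
writer.writerow([100, "biophysical", "pyr_l5.swc"])      # hypothetical entries
writer.writerow([101, "point_neuron", "NA"])

# Per-node data: parallel arrays, one entry per cell (HDF5 in the real format).
nodes = {
    "node_id": [0, 1, 2],
    "node_type_id": [100, 100, 101],  # shared properties looked up in the CSV
    "x": [10.0, 12.5, -3.0],          # example per-node attribute
}

n_cells = len(nodes["node_id"])
```

The design point this illustrates is the one named in the abstract: properties shared by many cells live once in a small editable table, while bulky per-cell data sit in binary arrays that are cheap to store and index.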
P61 Stability of synaptic weights in a biophysical model of plasticity in the neocortical microcircuit without explicit homeostatic mechanisms
Michael Reimann, Giuseppe Chindemi, Henry Markram, Eilif Muller
École Polytechnique Fédérale de Lausanne, Blue Brain Project, Lausanne, Switzerland
Correspondence: Michael Reimann (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P61
Spike-timing dependent synaptic plasticity has been characterized at the pairwise level in vitro. However, many of the identified forms of plasticity are inherently unstable in recurrent networks. For example, under Hebbian-style plasticity the strengthening of a connection increases the likelihood that it will be strengthened further, leading to runaway potentiation. Homeostatic mechanisms have been proposed to stabilize the system, but physiological evidence for them remains indirect and inconclusive. For a morphologically detailed model of a cortical microcircuit, in conjunction with a biologically constrained, calcium-based model of plasticity, we characterized the stability of plastic connectivity in a population of neurons in the absence of an explicit homeostatic mechanism. We explored the evolution of the strengths of 24 million recurrent glutamatergic synapses and their stability under in vivo-like conditions with simulated external input. We found that while individual synapse weights evolved significantly, there was a remarkable degree of stability in average synaptic strength at both the single-cell and population level. We then further characterized how the observed shift of synaptic strength between individual synapses affected the response properties of neurons, such as their average firing rates or their selectivity for individual stimuli, and observed an increase in both for neurons in cortical layer 5.
Giuseppe Chindemi1, James King1, Srikanth Ramaswamy1, Michael Reimann1, Christian A Rössert1, Werner Alfons Hilda Van Geit1, Henry Markram1, Vincent Delattre1, Adrien Devresse1, Michael Doron2, Jeremy Fouriaux1, Michael Graupner3, Pramod Kumbhar1, Max Nolte1, Rodrigo Perin4, Fabien Delalondre1, Idan Segev2, Eilif Muller1
1École Polytechnique Fédérale de Lausanne, Blue Brain Project, Lausanne, Switzerland; 2Hebrew University of Jerusalem, Department of Neurobiology, Jerusalem, Israel; 3Université Paris Descartes, Laboratoire de Physiologie Cérébrale—UMR 8118, CNRS, Paris, France; 4École Polytechnique Fédérale de Lausanne, Laboratory of Neural Microcircuitry, Lausanne, Switzerland
Correspondence: Giuseppe Chindemi (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P62
Synaptic connections in the brain form a highly dynamic map, constantly adapting to external stimuli and internal dynamics: new connections can be formed, while existing ones can be modified or eliminated throughout the entire life of the organism. This adaptability of synaptic connections is referred to as "synaptic plasticity" and is thought to be the foundation of learning and memory. Despite the intense interest of the scientific community, experimental and theoretical work on synaptic plasticity is highly fragmented: only a few connection types have been characterized experimentally, e.g. those between layer 5 thick-tufted pyramidal cells in the neocortex, and no model so far has been able to reconcile this sparse body of data. In this work, we integrated state-of-the-art data and theories on synaptic plasticity to design a unifying model of a plastic glutamatergic synapse in the neocortex. In particular, we extended a previous calcium-based model of spike-timing dependent plasticity (STDP) to account for more detailed synaptic dynamics: stochastic vesicle release, accurate NMDAR- and VDCC-mediated calcium currents, postsynaptic calcium accumulation and clearance, and the timescales of plasticity expression. Parameters of the model were then constrained to reproduce in vitro STDP data from layer 5 thick-tufted pyramidal cells in the somatosensory cortex. The optimized parameters were then applied to all other excitatory connections in the same brain area, with the sole exception of the ratio of potentiated to depressed synapses, which was re-calculated for each connection type to match the expected mean release probability. We successfully validated our generalization approach against independent plasticity data on layer 2/3 to layer 5 pyramidal connections, layer 2/3 to layer 2/3 pyramidal connections, and layer 4 to layer 4 spiny stellate connections.
Our results show that the biophysics of synaptic transmission and the spatial extent of neuronal morphologies play a crucial role in synaptic plasticity, owing to their influence on the magnitude and time course of postsynaptic calcium transients. Furthermore, we demonstrated that a few data points suffice to parametrize a large and heterogeneous set of connections, hinting that only a small set of targeted in vitro experiments may be necessary to completely characterize the features of synaptic plasticity in the brain.
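The calcium-based rule of Graupner and Brunel (2012) that the model extends can be sketched with a simple Euler integration of its deterministic part: a bistable efficacy variable is pushed up while calcium exceeds a potentiation threshold and down while it exceeds a lower depression threshold. All parameter values below are illustrative, not the fitted ones:

```python
# Deterministic Euler sketch of a Graupner-Brunel-style calcium rule.
# rho: synaptic efficacy in [0, 1]; c: instantaneous calcium level.
import numpy as np

tau, rho_star = 100.0, 0.5        # time constant (s), unstable fixed point
gamma_p, gamma_d = 300.0, 200.0   # potentiation / depression rates
theta_p, theta_d = 1.3, 1.0       # calcium thresholds (theta_p > theta_d)

def simulate(rho0, calcium, dt=1e-3):
    """Integrate rho under a given calcium trace (noise term omitted)."""
    rho = rho0
    for c in calcium:
        drift = -rho * (1 - rho) * (rho_star - rho)       # bistability
        drift += gamma_p * (1 - rho) * (c > theta_p)      # potentiation
        drift -= gamma_d * rho * (c > theta_d)            # depression
        rho += dt * drift / tau
    return rho

# Sustained calcium above theta_p drives potentiation; calcium held
# between the two thresholds drives depression.
up = simulate(0.2, np.full(20000, 2.0))    # c > theta_p throughout
down = simulate(0.8, np.full(20000, 1.1))  # theta_d < c < theta_p
```

The extended model described in the abstract replaces the abstract calcium variable with NMDAR- and VDCC-mediated transients shaped by the actual synapse location on the morphology, which is precisely why morphology matters for the outcome.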
Graupner M, Brunel N. Calcium-Based Plasticity Model Explains Sensitivity of Synaptic Changes to Spike Pattern, Rate, and Dendritic Location. Proceedings of the National Academy of Sciences 2012, 109 (10): 3991–96.
Markram H, Lübke J, Frotscher M, Sakmann B. Regulation of Synaptic Efficacy by Coincidence of Postsynaptic APs and EPSPs. Science 1997, 275 (5297): 213–15.
Markram H, Muller E, Ramaswamy S, et al. Reconstruction and Simulation of Neocortical Microcircuitry. Cell 2015, 163 (2): 456–92.
Sjöström PJ, Häusser M. A Cooperative Switch Determines the Sign of Synaptic Plasticity in Distal Dendrites of Neocortical Pyramidal Neurons. Neuron 2006, 51 (2): 227–38.
Egger V, Feldmeyer D, Sakmann B. Coincidence Detection and Changes of Synaptic Efficacy in Spiny Stellate Neurons in Rat Barrel Cortex. Nature Neuroscience 1999, 2 (12): 1098.
Jimin Kim, Eli Shlizerman
University of Washington, Electrical Engineering & Applied Mathematics, Seattle, WA, United States
Correspondence: Jimin Kim (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P63
Suh Woo Jung1, Jeffrey Riffell2, Eli Shlizerman1
1University of Washington, Department of Electrical Engineering, Seattle, WA, United States; 2University of Washington, Department of Biology, Seattle, WA, United States
Correspondence: Suh Woo Jung (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P64
Although it is widely known that dopamine (DA) neurons play critical roles in associative learning, the mapping of neurons and their effect on learning still remains unclear. In olfactory learning, it has been shown that superfusion of dopamine on mosquito brain strongly modulates activities of antennal lobe (AL) neurons. We therefore study neural population coding of mosquitoes AL projection neurons subject to dopamine modulation.
Pietro Quaglio1, Sonja Gruen1, Alper Yegenoglu2, Emiliano Torre3
1Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6), Jülich, Germany; 2Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6), Jülich, Germany; 3ETH Zürich, Chair of Risk, Safety and Uncertainty Quantification, Zürich, Switzerland
Correspondence: Pietro Quaglio (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P65
In 1949 Hebb  proposed cell assemblies, i.e. groups of interacting neurons, as building blocks of information processing in the cortex. A signature of an active cell assembly in parallel spike recordings are synchronous or spatio-temporal spike patterns (STPs) [2, 3]. Modern electrophysiological techniques enable the simultaneous recording of hundred(s) of neurons and thereby increase the chances to observe active cell assemblies.
In two recent publications we developed a method, called SPADE, to detect statistically significant spike patterns in massively parallel spike train data (MPST), where 100 or more parallel spike trains are available. The method was first limited to synchronous spikes, and then extended to spatio-temporal patterns. The method reduces the computational cost of extracting all possible repeated spike patterns by employing frequent itemset mining. To avoid a massive multiple-testing problem, it reduces the number of pattern candidates by pooling patterns with the same number of neurons and number of occurrences. SPADE then evaluates the statistical significance of the found patterns using non-parametric Monte-Carlo sampling under the null hypothesis of independence. Finally, significant patterns are tested for conditional significance against each other. We previously applied SPADE to search for repeated synchronous patterns in MPST recorded from motor and premotor cortex of macaque monkeys. The monkeys performed a delayed reach-to-grasp task: after a preparatory period, they had to pull and hold an object using either a side grip or a precision grip, with either high or low force. The recorded data were analyzed for the occurrence of significant synchrony in different behavioral epochs, and we found a variety of significant synchronous patterns with high specificity to behavior. Here we present the challenges that such data pose when aiming to detect significant STPs and how these can be addressed by deploying SPADE. In particular, we extend the statistical evaluation to test patterns of different temporal lengths separately, because otherwise the statistics are biased in favor of shorter patterns. By doing so, we now complement the previous results with the information provided by STPs.
We analyze pattern compositions in terms of the involved neurons and temporal arrangements in relation to behavior, confirming the expectation that extending the search to STPs increases the chance of detecting patterns involving a larger number of neurons. In conclusion, we show that the majority of the found spatio-temporal patterns are temporally locked to movement onset and exhibit different neuronal compositions for different grip modalities (precision grip or side grip).
Hebb (1949) The organization of behavior. Wiley & Sons
Singer W, Engel AK, Kreiter AK, et al. Neuronal assemblies: necessity, signature and detectability. Trends in Cognitive Sciences 1997, 1, 252–261
Harris KD. Neural signatures of cell assembly organization. Nature Reviews Neuroscience 2005, 6, 399–407
Torre E, Picado-Muino D, Denker M, et al. Statistical evaluation of synchronous spike patterns extracted by frequent item set mining. Frontiers in Computational Neuroscience 2013, 7:132
Torre E, Quaglio P, Denker M, et al. Synchronous Spike Patterns in Macaque Motor Cortex during an Instructed-Delay Reach-to-Grasp Task. Journal of Neuroscience 2016, 36(32), 8329–8340
Quaglio P, Yegenoglu A, Torre E, et al. Detection and Evaluation of Spatio-Temporal Spike Patterns in Massively Parallel Spike Train Data with SPADE. Frontiers in Computational Neuroscience 2017, 11
Georgios Detorakis1, Travis Bartley2, Emre Neftci1
1University of California, Irvine, Department of Cognitive Sciences, Irvine, CA, United States; 2University of California, Irvine, Department of Electrical Engineering & Computer Science, Irvine, CA, United States
Correspondence: Georgios Detorakis (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P66
Many computational models and widely used learning algorithms, such as back-propagation (BP), require a form of bidirectional synaptic weights. Xie and Seung have shown that BP, under some circumstances, can be equivalent to the Contrastive Hebbian Learning (CHL) algorithm. CHL has been proposed to explain biological phenomena such as hippocampal replay, where neural activity is transmitted back-and-forth between the hippocampus and the prefrontal cortex. CHL uses the transpose of the synaptic matrix to form a reverse connection between layers to account for weight changes in the forward connection. However, there is no evidence that cerebral areas talk to each other in a direct bidirectional way (the weight transport problem). This means that using neural networks with symmetric synaptic weights is not biologically plausible. In this work, we propose an alternative mechanism that enables CHL without the use of symmetric weights for feedback transmission. The proposed mechanism is not solely based on synaptic plasticity but exploits the dynamics of neurons in combination with a Hebbian learning rule. We combine a recently proposed random back-propagation algorithm with CHL. As with CHL, the neural network is trained in two phases, but in the reconstruction phase, feedback to previous layers is done using fixed random matrices. The proposed learning scheme uses continuous non-linear ordinary differential equations to describe the neural dynamics of the model. The layers of the feed-forward and the feedback subnetworks are treated as coupled neural systems, meaning that the information can be transmitted in a synchronous or asynchronous way without affecting the overall computation, as long as there is enough time for the individual dynamics to reach their corresponding equilibria. The current algorithm embeds dynamics from both the input and the output (target) signals to the neural dynamics through the feedback and the non-linear coupling.
In addition, during the backward phase, feedback corrects the error of the network based on the target signal. This error is propagated backward through constant random matrices, which draws some similarity to target propagation, where the gradient of the loss is computed with respect to the output and propagated backward to the previous layers of the network. We demonstrate that the proposed model performs well on a variety of tasks: digit classification (MNIST, 98% test accuracy), letter classification (eMNIST, 85% test accuracy), a logical operation (the XOR problem, 100% accuracy), sequence prediction (successful prediction of a sinusoidal wave and the Lorenz attractor), and an autoencoder encoding and decoding the MNIST dataset. The proposed learning scheme can be combined with other neural models so that more complex biological phenomena can be studied.
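The two-phase scheme can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the continuous ODE dynamics are replaced by fixed-point iteration, the task, layer sizes, learning rate, and feedback gain are illustrative choices, and B is the fixed random matrix that replaces the transposed forward weights in the reconstruction phase.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task: XOR, one of the benchmarks mentioned in the abstract
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [0.0]])

n_in, n_hid, n_out = 2, 8, 1
W1 = rng.normal(0.0, 0.5, (n_in, n_hid))   # forward weights, layer 1
W2 = rng.normal(0.0, 0.5, (n_hid, n_out))  # forward weights, layer 2
B = rng.normal(0.0, 0.5, (n_out, n_hid))   # fixed random feedback (replaces W2.T)
gamma, lr = 0.3, 0.1                       # feedback gain, learning rate

def settle(x, clamp=None, steps=30):
    """Iterate to equilibrium; feedback reaches the hidden layer through B,
    a crude stand-in for integrating the coupled dynamics to a fixed point."""
    h, y = np.zeros(n_hid), np.zeros(n_out)
    for _ in range(steps):
        top = y if clamp is None else clamp
        h = sigmoid(x @ W1 + gamma * (top @ B))
        y = sigmoid(h @ W2)
    return h, y

for _ in range(300):
    for x, t in zip(X, T):
        h_m, y_m = settle(x)           # free ("minus") phase
        h_p, _ = settle(x, clamp=t)    # clamped ("plus") phase
        # CHL update: plus-phase minus minus-phase Hebbian correlations
        W1 += lr * np.outer(x, h_p - h_m)
        W2 += lr * (np.outer(h_p, t) - np.outer(h_m, y_m))

pred = np.array([settle(x)[1][0] for x in X])
```

The key departure from standard CHL is that the clamped signal descends through the fixed random matrix B rather than through W2 transposed, so no weight transport is needed.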
Hinton GE, McClelland JL. In Neural information processing systems, 358–366, 1988.
Xie X, Seung HS. Neural Computation, 15(2), 441–454, 2003.
Lillicrap TP, Cownden D, Tweed DB, Akerman CJ. Nature Communications, 7, 13276, 2016.
Lee DH, Zhang S, Fischer A, Bengio Y. In ECML PKDD, 498–515. Springer, 2015.
Doris Voina1, Stefan Mihalas2, Stefano Recanatesi3, Eric Shea-Brown1
1University of Washington, Department of Applied Mathematics, Seattle, WA, United States; 2Allen Institute for Brain Science, Modelling, Analysis and Theory, Seattle, WA, United States; 3University of Washington, Department of Physiology and Biophysics, Seattle, WA, United States
Correspondence: Doris Voina (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P67
The processing of visual information depends strongly on the statistics of features in the visual field. The visual circuit that processes this information should be both robust to the precise statistics of the visual environment and flexible enough to account for a broad variety of visual features. Perhaps the most compelling changes in visual statistics are due to movement, yet how the neural circuitry underlying visual processing accounts for these changes remains unknown. Allen Brain Observatory data provided by the Allen Institute for Brain Science, along with other studies, suggest that the brain’s response to sensory input is strikingly modulated by locomotion. Specifically, the VIP group of neurons becomes preferentially activated during locomotion and influences multiple synaptic pathways in V1. The goal of this study is to investigate synaptic weights and firing rates of populations of neurons in V1 thought to be responsible for the coding of Gabor-like features, and to explain how these change when the animal switches behavioral state (from static to running and vice versa). The activity of these neurons is determined not only by their receptive fields, but also by lateral connections that modulate activity due to the surround. VIP neurons have been shown to interact with this circuit in a switch-like fashion, but there is presently no computational model that accounts for the algorithmic consequences of these interactions. We use a Bayesian model previously developed for visual inference in both images and videos (the latter emulating what animals would see while moving through the environment). In this model the connection between neurons depends primarily on the co-occurrence probability of the features to which the neurons respond preferentially. The differences in the synaptic connectivity computed on images and videos capture the predicted influence of movement on the neural processing of visual information.
The model further enables us to propose a role for VIP neurons at the circuit level and to explain movement-dependent changes in the signaling pathways. Finally, the obtained neuronal activity trends can be compared to the activity of neurons in mouse brains during a visual recognition task while the mice are running. As such, our results may play a key role in interpreting the high variability seen in V1 activity.
Matthew Farrell1, Stefano Recanatesi2, Eric Shea-Brown1
1University of Washington, Department of Applied Mathematics, Seattle, WA, United States; 2University of Washington, Department of Physiology and Biophysics, Seattle, WA, United States
Correspondence: Matthew Farrell (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P68
Stefano Recanatesi1, Gabriel Ocker2, Eric Shea-Brown3
1University of Washington, Center for Computational Neuroscience, Seattle, WA, United States; 2Allen Institute for Brain Science, Modelling, Analysis and Theory, Seattle, WA, United States; 3University of Washington, Department of Applied Mathematics, Seattle, WA, United States
Correspondence: Stefano Recanatesi (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P69
How does the connectivity of a network combine with the input the network receives to shape its response to that input? We approach this question by relating the internal network response to the statistical prevalence of connectivity motifs, a set of surprisingly simple and local statistics of the network topology. The resulting motif description provides a reduced-order model of the network dynamics. Through this framework we compute the dimensionality of the response, a measure tightly linked to the number of PCA components needed to describe the state of the network. We study this measure as the connectivity (statistics of motifs) and the input structure vary. We find that different network topologies can expand or compress the dimensionality, and that this can be accomplished locally at the single-neuron level by increasing or decreasing specific network motifs (e.g. divergent connections). Furthermore, we link these properties to how the network responds to inputs. The total dimensionality of the network response depends on the input properties, in particular on the strength of the input drive and on its dimensionality. The network can then operate in different regimes, compressing the input dimensionality or matching it, being more or less sensitive to the input drive. Finally, we consider whether the network is fully excitatory or balanced (considering a balanced network of excitatory and inhibitory neurons). Balanced networks show a variety of behaviors that go beyond the capabilities of fully excitatory systems. We characterize how the dimensionality of such systems varies with connectivity motifs and input properties.
Overall, the framework we develop provides powerful theoretical tools to understand the functionality of neural network systems in terms of high-level descriptors, such as dimensionality, that are linked to neural correlations and the properties of neural representations.
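The dimensionality measure tied to the number of PCA components is commonly computed as the participation ratio of the covariance eigenvalues, PR = (Σλ_i)² / Σλ_i². A small sketch showing that a low-dimensional input drive yields a lower response dimensionality than a high-dimensional one; the recurrent network here uses plain random Gaussian connectivity rather than the motif-structured connectivity of the study, and all sizes and statistics are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def participation_ratio(traj):
    """PR = (sum lam)^2 / sum lam^2 over covariance eigenvalues: the
    effective number of PCA components of traj (time steps x units)."""
    lam = np.clip(np.linalg.eigvalsh(np.cov(traj.T)), 0.0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

n, steps = 100, 2000
W = rng.normal(0.0, 0.5 / np.sqrt(n), (n, n))  # random recurrent weights

def run(input_fn):
    """Drive the rate network x <- tanh(Wx + u) and record its trajectory."""
    x, traj = np.zeros(n), np.empty((steps, n))
    for t in range(steps):
        x = np.tanh(W @ x + input_fn())
        traj[t] = x
    return traj

# High-dimensional drive: independent input to every unit
pr_high = participation_ratio(run(lambda: rng.normal(0.0, 1.0, n)))

# Low-dimensional drive: the same network driven through a rank-2 embedding
M = rng.normal(0.0, 1.0, (n, 2))
pr_low = participation_ratio(run(lambda: M @ rng.normal(0.0, 1.0, 2)))
```

In this sketch the response dimensionality tracks the input dimensionality; in the abstract's framework the motif statistics of W additionally expand or compress it.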
P70 Action potential propagation in axons: Effect on sodium conductance of collateral and sub-branch distance from soma
Ngwe Sin Phyo, Erin Munro Krull
Beloit College, Department of Mathematics and Computer Science, Beloit, WI, United States
Correspondence: Erin Munro Krull (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P70
Realistic axons are complex, which makes it difficult to predict their behavior during action potential (AP) propagation. It is often assumed that APs successfully propagate down the axon. Previous literature has predicted AP propagation in electrotonically symmetric axons, yet we cannot predict propagation in electrotonically asymmetric axons. In this study, we looked at the sodium conductance (gNa) of the axon, which determines the axon’s excitability. We initiated APs in a collateral branch and tested whether they successfully propagate to the end of the axon. The simplest model we used was a neuron with at most three collateral branches. We simulated our neurons, studying the threshold gNa required for APs to propagate while varying the distance of the collateral branches along the axon and of the sub-branches within collateral branches. From this research, we would like to develop a theory to predict AP propagation, which in turn, we hope, will tell us more about how neurons compute. Since neurons are the basic building blocks of the nervous system, we also hope this work will help future studies improve treatment for neurological disorders.
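The threshold-gNa search described above is, in essence, a one-dimensional root find: if propagation success is monotone in gNa, the threshold can be bracketed by bisection. A sketch with a hypothetical propagates() stand-in for the compartmental simulation; in the actual study that call would run the branched-axon model and test for a spike at the distal end:

```python
def find_threshold_gna(propagates, lo=0.0, hi=1.0, tol=1e-4):
    """Bisect for the minimal gNa at which an AP initiated in a collateral
    still reaches the end of the axon; assumes success is monotone in gNa."""
    assert not propagates(lo) and propagates(hi), "threshold not bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if propagates(mid):
            hi = mid   # propagation succeeded: threshold is at or below mid
        else:
            lo = mid   # propagation failed: threshold is above mid
    return hi

# Hypothetical stand-in simulator with a known threshold, for illustration only
toy_threshold = 0.12
g_star = find_threshold_gna(lambda gna: gna >= toy_threshold)
```

Repeating this search while sweeping branch distances maps out the threshold surface the abstract refers to.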
P71 Action potential propagation in axons: how sodium conductance can estimate propagation as collateral and sub-branch length vary
Yizhe Tang, Erin Munro Krull
Beloit College, Department of Mathematics and Computer Science, Beloit, WI, United States
Correspondence: Erin Munro Krull (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P71
Predicting when an action potential can propagate in neuronal axons is a long-standing problem in both mathematics and neuroscience. Previous research showed that when an axon is electrotonically symmetric, action potential propagation can be predicted; however, most axons are not electrotonically symmetric. This research uses simulation to address the asymmetric case by examining a key parameter: the sodium conductance of the axon. We hope to develop a fundamental theory that linearly predicts action potential propagation and can be applied to different axon geometries. Predicting action potential propagation may help us better understand neuronal computation as well as how disorders may affect computation; for instance, axonal sprouting as seen in epilepsy may hinder propagation. My colleagues and I each looked at different configurations of axons. I tested the case where the length of a sub-branch on a collateral branch varies, looking at four parameters: the electrotonic length of the sub-branch, the electrotonic length of the collateral branch, the distance of the sub-branch from the main axon, and the distance of the parent branch from the soma. We may approximate how the sub-branch affects propagation by looking at different combinations of these four parameters.
P73 The ratio of specialist and generalist neurons in the feature extraction phase determines the odor processing capabilities of the locust olfactory system
Aaron Montero, Jessica Lopez-Hazas, Francisco B Rodriguez
Universidad Autónoma Madrid, Ingeniería Informática, Madrid, Spain
Correspondence: Aaron Montero (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P73
Measuring the processing capabilities of different nervous systems has long been of interest in neuroscience. We observed that some capabilities can be measured by the ratio of specialist and generalist neurons among the Kenyon cells (KCs) of the locust olfactory system. These types of neurons are part of the neural diversity of biological nervous systems; specifically, they represent the heterogeneity of neural responses to stimuli. While specialists react to only a few stimuli, generalists respond to a wide range of them. Hence, it has been suggested that specialist neurons are essential for stimulus discrimination while generalists extract common, generic properties. We previously proved the requirement of specialists for pattern recognition, but we also observed that generalists were sometimes needed for this task. Thus, there is a certain ratio of these two types of neurons that depends on stimulus complexity. When the input complexity was low, the minimum classification error was achieved with almost any specialist/generalist (S/G) ratio. When the complexity was intermediate, both were required to minimize the classification error, usually in similar proportion. Finally, when the complexity was high, only specialists were needed to minimize the error. As we linked the complexity level to an S/G ratio and to classification success, we can invert this relationship to estimate stimulus complexity and olfactory system accuracy by analyzing the S/G numbers in neural recordings. Therefore, we used recordings from locust KCs to calculate this ratio, based on the neural responses of 43 neurons to 17 different stimuli. We estimate that the percentage of generalists in the KCs of the locust is 23.26%.
This ratio corresponds to an intermediate complexity of 51.34% according to our calculations, which also provides information about the number of odors the locust can differentiate, since complexity and capacity appear to be related. To check these results, we measured the complexity of patterns in the projection neurons (PNs) of the antennal lobe, using recordings of 14 PNs for 3 different odorants. The complexity observed for this reduced number of neurons and odors was 63.38%, which is close to the 51.34% calculated from KCs. This complexity implies that all PNs are generalists, which coincides with the recording data. Finally, from these two complexity values, we estimate that the reliability of the odor discrimination process in the locust lies between 74.87% and 92.04%.
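The S/G classification itself reduces to a response-breadth count per neuron. A sketch on synthetic data; the binary response table, the per-neuron response probabilities, and the 50% breadth cutoff are illustrative assumptions, whereas the actual analysis uses significance-tested responses from the recordings:

```python
import numpy as np

rng = np.random.default_rng(2)

n_neurons, n_stimuli = 43, 17   # matching the sizes of the locust KC recordings
# Synthetic binary response table: entry (i, j) = True if neuron i responds
# to stimulus j; per-neuron response probabilities drawn heterogeneously
p = rng.uniform(0.05, 0.9, (n_neurons, 1))
responses = rng.random((n_neurons, n_stimuli)) < p

breadth = responses.mean(axis=1)   # fraction of stimuli each neuron answers
cutoff = 0.5                       # illustrative specialist/generalist boundary
generalist_pct = 100.0 * np.mean(breadth > cutoff)
specialist_pct = 100.0 - generalist_pct
```

Inverting the complexity-to-ratio relationship then maps the measured percentage back to an estimated stimulus complexity, as done in the abstract.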
We thank Ramon Huerta for his helpful discussions and Javier Perez-Orive and Gilles Laurent for providing the neural recordings of locust. This research was supported by the Spanish Government projects TIN2014-54580-R and TIN2017-84452-R.
Montero A, Huerta R and Rodriguez FB. Stimulus space complexity determines the ratio of specialist and generalist neurons during pattern recognition. Journal of the Franklin Institute 2018, 355(5), 2951–2977.
Christensen TA. Making scents out of spatial and temporal codes in specialist and generalist olfactory networks. Chem. Senses 2005, 30, 283–284.
Wilson RI, Turner GC, Laurent G. Transformation of olfactory representations in the Drosophila antennal lobe. Science 2004, 303(5656), 366–370.
Montero A, Huerta R, and Rodriguez FB. Specialist Neurons in Feature Extraction Are Responsible for Pattern Recognition Process in Insect Olfaction. In International Work-Conference on the Interplay Between Natural and Artificial Computation, Springer, Cham, 2015. part I p. 58–67.
Perez-Orive J, Mazor O, Turner GC, Cassenaer S, Wilson RI and Laurent G. Oscillations and sparsening of odor representations in the mushroom body. Science 2002, 297(5580), 359–365.
Rodriguez FB and Huerta R. Techniques for temporal detection of neural sensitivity to external stimulation. Biological cybernetics 2009, 100(4), 289–297.
Rodriguez FB and Huerta R. Analysis of perfect mappings of the stimuli through neural temporal sequences. Neural networks 2004, 17(7), 963–973.
P74 Regulation of neural threshold in Kenyon cells through their sparse condition improves pattern recognition performance
Jessica Lopez-Hazas, Aaron Montero, Francisco B Rodriguez
Universidad Autónoma Madrid, Ingeniería Informática, Madrid, Spain
Correspondence: Aaron Montero (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P74
The insect olfactory system is capable of classifying an almost infinite number of odorants at different concentrations. This task is carried out in three processing stages: starting at the insect antenna, passing through the antennal lobe (AL), and finally classifying the odorants at the mushroom body (MB). The strategies the system applies in each of these layers to discriminate stimuli have been extensively studied. Regarding the AL and MB, three mechanisms have been proposed as being of great importance for ensuring and improving the success of odorant classification: a heterogeneous threshold distribution in the Kenyon cells (KCs) of the MB [1, 2], a mechanism for gain control in the AL layer, and sparse coding in the KC layer, which improves pattern differentiation while providing energetic efficiency. In this work, we use a model of the insect olfactory system that takes into account the biological facts about the network architecture and also includes the three strategies above. The model is based on neural networks and supervised learning [7, 2], and our goal is to study how information processing takes place in the biological system by testing the relevance of these mechanisms to the energetic cost and the performance of the network on a pattern classification task, paying particular attention to threshold distribution and sparse coding. The heterogeneous thresholds are introduced in the model through a learning algorithm that allows the network to find an optimal threshold distribution of the KCs for a given classification problem. Gain control is achieved through renormalization of the patterns in the input layer so that the activation of the neurons is uniform across patterns. Finally, an activity regulation term is introduced into the supervised learning rule with the aim of controlling the level of activity in the KCs.
The activity regulation term (ART) is defined as ART = (1/N_KC) [Σ_i (y_i − s)]², where N_KC is the number of KCs, s ∈ [0, 1] controls the level of activity in the KC layer (from no activity at s = 0 to maximum activity at s = 1), and y_i is the activation of the i-th KC in the network. The results show that a model including the activity regulation term outperforms one that lacks it on the classification problem presented (a simplified version of the MNIST dataset). The model also obtains better results when the connection probability between AL and MB neurons is low, in the interval [0.1–0.3], and the sparsity level in the KC layer is high, which is consistent with what is observed in the real biological system [5, 6] and ensures energetic efficiency.
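The regulation term can be written directly as a function of the KC activations; a minimal sketch with toy activation values chosen for illustration:

```python
import numpy as np

def activity_regulation_term(y, s):
    """ART = (1/N_KC) * (sum_i (y_i - s))^2: penalizes deviation of the
    KC layer's summed activity from the target level set by s in [0, 1]."""
    y = np.asarray(y, dtype=float)
    return (y - s).sum() ** 2 / y.size

# Toy KC activations: 2 of 5 cells active
y = np.array([1.0, 0.0, 0.0, 1.0, 0.0])
art_sparse_target = activity_regulation_term(y, 0.2)
```

With s equal to the mean activation the term vanishes; pushing s toward 0 penalizes net activity and so promotes the sparse KC coding discussed above.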
We thank Ramón Huerta for his useful discussions on this work. This research was supported by the Spanish Government projects TIN2014-54580-R and TIN2017-84452-R.
Montero A, Huerta R, Rodriguez FB. Neurocomputing, 2015, 151, 69–77.
Montero A, Huerta R, Rodríguez FB. Springer, Berlin, Heidelberg 2013, 16–25.
Montero A, Mosqueiro T, Huerta R, Rodriguez FB. Springer, Cham 2017, 317–26.
Olsen SR, Wilson RI. Nature. 2008, 452:956.
Perez-Orive J, Mazor O, Turner GC, Cassenaer S, Wilson RI, Laurent G. Science. 2002, 297, 359–65.
Sanda P, Kee T, Gupta N, Stopfer M, Bazhenov M. J Neurophysiol 2016, 115, 2303–16.
Huerta R, Nowotny T, García-Sanchez M, Abarbanel HDI, Rabinovich MI. Neural Comput. 2004, 16, 1601–40.
MNIST handwritten digit database [http://yann.lecun.com/exdb/mnist/]
P76 Local excitatory/inhibitory imbalances shape global patterns of activity: A model for desynchronized activity under anesthesia in Alzheimer’s disease
Merav Stern1, Gabriel Ocker2
1University of Washington, Department of Applied Mathematics, Seattle, WA, United States; 2Allen Institute for Brain Science, Modelling, Analysis and Theory, Seattle, WA, United States
Correspondence: Merav Stern (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P76
It has been shown that the highly correlated neural activity known to appear under anesthesia is severely reduced in a mouse model of Alzheimer’s disease (AD). It has also been shown that AD mice develop a sub-group of silent excitatory neurons alongside highly hyperactive excitatory neurons, and that the correlated neuronal activity under anesthesia can be restored by enhancing the inhibitory synaptic inputs onto the hyperactive excitatory neurons. Taken together, these studies suggest that in AD mice, changes in the balance of inhibitory connections onto subgroups of excitatory cells shift network-wide activity. We propose a neural network model that explains these phenomenological changes in overall network behavior. We characterize how a separation between excitatory and inhibitory functional connectivity gives rise to correlated population activity, and our analysis explains why these correlations are disrupted by changes in circuit connectivity.
Our model includes rate-based neuron units that are explicitly separated into excitatory and inhibitory types. Hence, our connectivity matrix is constrained to have columns with positive entries and columns with negative entries, representing input from excitatory and inhibitory populations respectively. The eigenvalue spectra of such random matrices have been shown to have outliers, which we further constrain by requiring a tight balance: the summed excitatory and inhibitory connections into each unit are matched exactly. This constraint has been shown to remove the outlier eigenvalues and give rise to highly correlated activity across the network, with slow-wave-like dynamics of the mean activity that resemble activity in wild-type mice.
In addition to this population-wide component of activity, our tightly balanced network model exhibits chaotic fluctuations of single-unit activity around the population mean. We show that the residual activity resembles fully random neural network models but with a time-varying magnitude that depends on the mean activity. The strength of the residual chaotic activity in the tightly balanced network is determined by the variance of the synaptic strengths, while the magnitude of the correlated activity component is determined by the mean strengths of the excitation and inhibition. We model the pathology observed in AD mice by breaking the tight balance between excitation and inhibition within subgroups of excitatory neurons, while maintaining the overall excitation/inhibition balance in the network. This pathology shifts the network into an uncorrelated, chaotic state that resembles the recordings from AD mice.
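The effect of tight balance on the spectrum can be checked directly. A sketch in which population sizes, weight statistics, and the per-row rescaling used to enforce exact balance are illustrative choices, not the model's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

n_e, n_i = 80, 20                 # excitatory / inhibitory unit counts
n = n_e + n_i
g = 1.0 / np.sqrt(n)
# Dale's law: excitatory columns positive, inhibitory columns negative
J = np.abs(rng.normal(0.0, g, (n, n)))
J[:, n_e:] *= -1.0
unbalanced_radius = np.abs(np.linalg.eigvals(J)).max()

# Tight balance: rescale each row's inhibitory inputs so that excitation
# and inhibition into every unit cancel exactly (row sums equal zero)
Jb = J.copy()
scale = Jb[:, :n_e].sum(axis=1) / -Jb[:, n_e:].sum(axis=1)
Jb[:, n_e:] *= scale[:, None]
balanced_radius = np.abs(np.linalg.eigvals(Jb)).max()
```

In this sketch the unbalanced matrix carries a large outlier eigenvalue that the exact row balance removes, shrinking the spectral radius toward the bulk; breaking the balance in a subgroup of rows, as in the AD pathology model, reintroduces outlier structure.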
Busche MA, Kekus M, Adelsberger H et al. Rescue of long-range circuit dysfunction in Alzheimer’s disease models. Nature Neuroscience 2015, 18, 1623–30
Busche MA, Eichhoff G, Adelsberger H et al. Clusters of hyperactive neurons near amyloid plaques in a mouse model of Alzheimer’s disease. Science 2008, 321, 1686–1689
Tao T, Vu V, Krishnapur M. Random matrices: Universality of ESDs and the circular law. Ann. Probab. 2010, 38(5), 2023–65
Rajan K, Abbott LF. Eigenvalue spectra of random matrices for neural networks. Phys. Rev. Lett. 2006, 97(18)
Stern MS, Abbott LF. CNS meeting 2016; Takashi H, Fukai T. arXiv 2017; Landau I, Sompolinsky H. COSYNE meeting 2018
Martin Schumann1, Gabriele Scheler2
1Technical University of Munchen, Computer Science, Munich, Germany; 2Carl Correns Foundation for Mathematical Biology, Mountain View, United States
Correspondence: Martin Schumann (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P77
We present a first view of a new type of neuron model, currently under development, which differs significantly from membrane- or action-potential-based models. The neuron is conceptualized as a unit with an internal processing system G (including the proteome and the nuclear transcription system); an axonal system A, conceptualized as a Boolean vector plus parameters h attached to each vector unit (presynapse); and a dendritic system D (postsynapse), similar to system A but with an evaluation function F. GLIF-type models can be recovered by equating G with a sigmoidal activation function, F with a majority rule, and the parameters h with synaptic weights. We concentrate on the interaction of G and h under random conditions. A system of 100–1000 neurons is set up with systemically differing conditions, according to biological observations on lognormal properties. We load it with patterns that adjust the h parameters by Hebbian learning, and the h parameters inform the internal network G to adjust protein expression levels. The internal network G reaches a state where the internal values are read out to adjust the h parameters, in this way altering the processing properties of the neurons, both dendritic (postsynaptic) and axonal (presynaptic). We thus have an internal store of previous information that can adjust the h parameters at a later time. A neuron may be ready for read-out after a succession of storage events (avalanche model), but different rules may also be used. The system is able to replicate detailed data on neural plasticity (e.g. ). It creates levels of memory for pattern storage and retrieval. The h parameters are able to record an active pattern and to construct a frequentist representation through pre- and postsynaptic connections. The internal G system stores selected features from the h system and writes them back to the system.
In this way, the learned patterns of synaptic connectivity can be adjusted and locally overwritten by the internal storage system G. Synaptic connectivity overall is then adjusted on the basis of those local overwrites through continuing network activity. The G-based overwrite may happen continuously or according to an avalanche model, i.e. rare updates followed by a concerted rewrite of all instances of h values whose G system values differ. We first evaluate the system with a randomized overwrite of h, in order to study its evolution between elimination of the overwritten weights and escalation/dominance of those weights. This is the basis for meaningful editing, which allows information to be processed. The results of the random test runs are used to evaluate the storage and processing properties of the combined G/h system. The G read-out extends to the Boolean evaluation functions F. At present these operate according to majority rules, but they can be edited to include localized supralinear cluster summation or an inhibitory veto. Editing of the Boolean evaluation function will be studied separately, also in a randomized fashion. The goal of the system is to perform meaningful pattern memory and abstraction tasks.
Scheler G. Logarithmic distributions prove that intrinsic learning is Hebbian. F1000Research 2017, 6, 1222
Dehorter N, Ciceri G, Bartolini G, et al. Tuning of fast-spiking interneuron properties by an activity-dependent transcriptional switch. Science 2015, 349(6253) 1216–1220
P78 Predictable variability in sensory-evoked responses in the awake brain: optimal readouts and implications for behavior
Audrey Sederberg1, Aurélie Pala1, He Zheng1, Biyu He2, Garrett Stanley1
1Georgia Institute of Technology, Coulter Dept. of Biomedical Engineering, Atlanta, GA, United States; 2Langone Medical Center, New York University, Departments of Neurology, Neuroscience and Physiology, and Radiology, New York, NY, United States
Correspondence: Audrey Sederberg (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P78
Curto C, Sakata S, Marguet S, et al. A Simple Model of Cortical Dynamics Explains Variability and State Dependence of Sensory Responses in Urethane-Anesthetized Auditory Cortex. J Neurosci 2009, 29(34), 10600–10612.
Kelly C, Uddin LQ, Shehzad Z, et al. Broca’s region: linking human brain functional connectivity data and non‐human primate tracing anatomy studies. J Comput Neuro 2010
Potworowski J, Jakuczun W, Leski S, et al. Kernel current source density method. NECO 2012, 24(2), 541–575.
P79 Selectivity and sensitivity of cortical neurons to electric stimulation using ECoG electrode arrays
Pierre Berthet1, Espen Hagen1, Torbjørn V Ness2, Gaute Einevoll2
1University of Oslo, Department of Physics, Oslo, Norway; 2Norwegian University of Life Sciences, Faculty of Science and Technology, Ås, Norway
Correspondence: Pierre Berthet (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P79
High-resolution, non-penetrating devices for direct electric stimulation of sensory cortex have the potential to become neuroprosthetic devices that compensate for deficits in sight or hearing. The effect of extracellular electric stimulation with such devices, however, has so far not been investigated thoroughly, in particular in the context of the variety of stimulation patterns possible with high-density electrode arrays and the variety of neuron types. In the context of visual neuroprosthetic devices, the ability to selectively stimulate different groups of neurons, and thereby potentially create many different phosphenes (visual impressions), is important.
Here, we combine neuronal modeling and electrostatic volume-conductor theory to investigate the effect of electrical stimulation on the generation of neuronal action potentials. In general, successful stimulation will depend on properties of the neuron, such as its position, morphology, and membrane properties, as well as on the electrical stimulation pattern, i.e., the geometrical arrangement of the stimulating contacts, the electric pulse amplitudes and temporal forms, etc. To quantify the excitability of the neurons, we first consider the sensitivity, that is, the minimum stimulation current amplitude (threshold current) needed to generate an action potential for a particular neuron and stimulation pattern. We also investigate the selectivity, that is, the dependence of the threshold current on the position of the neuron. Biophysically detailed multicompartment models of cortical neurons are simulated using the NEURON simulation environment and LFPy. The neurons are assumed to be embedded in an infinite homogeneous, isotropic and ohmic medium. We compute the electric potentials generated by electrocorticography (ECoG) electrode arrays and impose these as boundary conditions for the electric potential immediately outside each neuronal compartment. These imposed potentials in turn affect the neuronal dynamics and the generation of action potentials. We first study stylized morphologies and demonstrate a critical role of their orientation and position relative to the applied electric field, and also of the polarity of the stimulation current [3, 4]. We further investigate the sensitivity and selectivity of morphologically detailed biophysical models, including models from the Allen Brain Institute and the Blue Brain Project, for various configurations of the electrode arrays.
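Under the infinite homogeneous, isotropic, ohmic medium assumption, the potential imposed by a stimulation contact idealized as a point current source is phi(r) = I / (4 * pi * sigma * r). A sketch evaluating such boundary-condition potentials at compartment midpoints; the current amplitude, conductivity, and geometry are illustrative, and real ECoG contacts are extended rather than point-like:

```python
import numpy as np

def point_source_potential(I, sigma, contact_pos, points):
    """Extracellular potential (V) of a point current source I (A) in an
    infinite homogeneous ohmic medium of conductivity sigma (S/m)."""
    r = np.linalg.norm(points - contact_pos, axis=1)
    return I / (4.0 * np.pi * sigma * r)

sigma = 0.3                        # S/m, a typical value for cortical tissue
I = -10e-6                         # 10 uA cathodic pulse
contact = np.array([0.0, 0.0, 0.0])
# Midpoints of three compartments at 50, 100, and 200 um from the contact
comps = np.array([[50e-6, 0, 0], [100e-6, 0, 0], [200e-6, 0, 0]])
phi = point_source_potential(I, sigma, contact, comps)
```

These imposed potentials fall off as 1/r, so compartments near the contact dominate the stimulation effect, which is one source of the position dependence (selectivity) described above.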
Neural Engineering System Design (NESD) program from the Defense Advanced Research Projects Agency (DARPA).
Carnevale NT, Hines ML. The NEURON Book. Cambridge: Cambridge University Press, 2006.
Lindén H, Hagen E, Łęski S, et al. LFPy: a tool for biophysical simulation of extracellular potentials generated by detailed model neurons. Front. Neuroinform. 2014, 7(1), 1–15.
Rattay F. Modeling the excitation of fibers under surface electrodes. IEEE Trans. Biomed. Eng. 1988, 35(3), 199–202.
Rattay F. The basic mechanism for the electrical stimulation of the nervous system. Neuroscience 1999, 89(2), 335–346.
P80 Patterns of gastrointestinal motility and the effects of temperature and menthol: A modelling approach
Parker Ellingson1, Taylor Kahl1, Sarah Johnson1, Natalia Maksymchuk1, Sergiy Korogod2, Chun Jiang3, Gennady Cymbalyuk1
1Georgia State University, Neuroscience Institute, Atlanta, GA, United States; 2Bogomoletz Institute of Physiology, National Academy of Sciences of Ukraine, Kiev, Ukraine; 3Georgia State University, Department of Biology, Atlanta, GA, United States
Correspondence: Parker Ellingson (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P80
Proper digestive functioning requires a variety of coordinated activities in the gastrointestinal tract. In the starved state, peristaltic waves are the dominant pattern of motility in the small intestine. After a meal, the intestine switches to a mixing pattern reminiscent of the beating pattern produced by interacting oscillators with different frequencies, which helps to break up food particles and increase nutrient absorption. These patterns are also modulated by temperature and a variety of pharmacological agents. Rhythm-generating cells known as interstitial cells of Cajal (ICC), electrically connected to the smooth muscle, drive the patterns of motility. The mixing pattern is particularly interesting dynamically and is generally explained as an interaction of two classes of ICC oscillating at different frequencies (1). We suggest that the intrinsic dynamics of a single ICC can explain the mixing pattern as well. We developed models of ICC containing intracellular calcium dynamics and Hodgkin-Huxley representations of key ionic currents. Both the endoplasmic reticulum (ER) and the mitochondria are intracellular calcium stores that could produce calcium oscillations with different periods in the cytosol. We used a mathematical model of subcellular dynamics (4) to observe interactions between the calcium oscillations from the ER and the mitochondria. Varying the concentration in one organelle can control the period of the oscillation through the other organelle. A combination of two such subcellular models, assuming weak diffusive coupling, produced a beating pattern. We compared the results of this model to our experimental recordings of muscle contractions from murine small intestines. Our model suggests a mechanism for the mixing pattern: interactions between two oscillatory calcium subsystems within a single ICC. We also investigated the effects of temperature on motility patterns by adjusting Q10 values and incorporating the dynamics of TRPA1 channels into our model.
These results explain how temperature can affect the frequency of oscillations, which is consistent with experimental data (3). As the TRPA1 channel is also sensitive to menthol, we show that our model reproduces experimental data on menthol treatment of ICC (2). The model shows that factors affecting the internal calcium dynamics impact the period of oscillations, while factors that affect membrane-based currents primarily affect amplitude. In conclusion, we demonstrate that ICC are capable of producing a variety of basic regimes of activity corresponding to key motility patterns.
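The beating regime described above can be illustrated with a minimal superposition of two calcium-like oscillations at different frequencies; the frequencies below are arbitrary placeholders, not fitted ICC values, and this sketch deliberately omits the coupled subcellular dynamics of the actual model:

```python
import numpy as np

# Two oscillations at nearby frequencies f1, f2 (Hz); their sum "beats"
# with an envelope period of 1/|f1 - f2| -- the qualitative pattern that
# ER- and mitochondria-driven calcium subsystems could produce in one ICC.
f1, f2 = 1.0, 1.1          # placeholder frequencies, not fitted values
t = np.linspace(0.0, 20.0, 20001)
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Trigonometric identity: the sum equals a fast carrier times a slow envelope.
carrier = np.sin(np.pi * (f1 + f2) * t)
envelope = 2 * np.cos(np.pi * (f1 - f2) * t)
```

With these placeholder values the envelope repeats every 1/|f1 − f2| = 10 s, which is the slow modulation seen in the mixing pattern.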
This project was supported by the GSU Brains and Behavior program
Huizinga JD, et al. The Origin of Segmentation Motor Activity in the Intestine. Nat Commun 2014, 5, 3326.
Kim HJ, Wie J, So I, Jung MH, Ha KT, Kim BJ. Menthol Modulates Pacemaker Potentials through TRPA1 Channels in Cultured Interstitial Cells of Cajal from Murine Small Intestine. Cell Physiol Biochem. 2016, 38(5), 1869–82
Kito Y, Suzuki H. Properties of pacemaker potentials recorded from myenteric interstitial cells of Cajal distributed in the mouse small intestine. J Physiol. 2003, 552, 803–818.
Marhl M, Haberichter T, Brumen M, Heinrich R. Complex calcium oscillations and the role of mitochondria and cytosolic proteins. Biosystems 2003, 57, 75–86.
P81 Mechanisms underlying locomotion and paw-shaking rhythms in cat multifunctional central pattern generator
Jessica Green1, Boris Prilutsky2, Gennady Cymbalyuk1
1Georgia State University, Neuroscience Institute, Douglasville, GA, United States; 2Georgia Institute of Technology, Department of Biology, Atlanta, GA, United States
Correspondence: Jessica Green (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P81
Could two drastically different rhythms, such as those of cat locomotion and paw-shaking, be controlled by the same network of neurons? To answer this question, we built a model of a multifunctional central pattern generator (CPG). Our model, constructed as a half-center oscillator (HCO), is able to produce multistability of a locomotion-like rhythm and a paw-shake-like rhythm. It uses a novel mechanism involving two slow currents: a slowly inactivating calcium current and a slowly inactivating sodium current. Transient paw-shake-like activity can be elicited in our model, and this transient activity exhibits asymmetric trends throughout consecutive bursts in accordance with experimental data. Here, our model has only the locomotion-like rhythm present and generates only transient paw-shake-like activity. We investigated the model's responses to various types of afferent stimulation during locomotion-like activity and transient paw-shake-like activity. We predict that applying a 1-second pulse of current to groups Ia and II afferents from cat hip flexors and extensors during locomotion, which have access to the flexor and extensor half-centers of the CPG rhythm generator, will evoke a paw-shake response in that hindlimb. According to our model, the duration of this transient activity depends on the phase of stimulation in the locomotion rhythm. Also, the duration of the transient activity increases with the duration of the pulse. The duration of transient paw-shake-like activity could be extended when a short pulse of current is applied during transient paw-shake-like bursting. We predict that applying a short 20-millisecond pulse of excitatory current to groups Ia and II afferents from either hip flexors or extensors during a paw-shake response will extend the duration of the paw-shake response.
Furthermore, the duration of the paw-shake response would increase as the duration of this stimulus increases until some threshold duration is reached, beyond which the duration of the paw-shake response remains roughly constant as the stimulus duration increases. In addition, the extension of the response would depend on the phase of pulse application in the paw-shaking cycle. If the pulse is applied near the beginning of the paw-shake response, the extended response lasts longer when the pulse falls in the extensor phase rather than the flexor phase. This asymmetry weakens if the pulse onset is delayed during the paw-shake response. These predictions are robust and can be tested experimentally to investigate whether the obtained responses during locomotion and paw-shaking are consistent with the idea that the two rhythmic behaviors are generated by the same multifunctional CPG. Confirming these predictions experimentally would provide strong evidence for the hypothesis that the paw-shake response in cats is generated as a transient response of the locomotion CPG.
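The half-center organisation itself can be caricatured with a two-oscillator phase model in which mutually inhibitory coupling stabilises antiphase activity. This is only a phase-reduction toy with arbitrary parameters, not the conductance-based HCO with slow calcium and sodium currents used in the study:

```python
import math

# Two phase oscillators with symmetric coupling K < 0, a crude stand-in
# for mutual inhibition between half-centers. The phase difference
# phi = th2 - th1 obeys d(phi)/dt = -2*K*sin(phi), so for K < 0 the
# antiphase state phi = pi (flexor/extensor alternation) is stable.
def simulate_hco(K=-0.5, omega=2 * math.pi, dt=1e-3, steps=20000):
    th1, th2 = 0.0, 0.1          # start nearly in phase
    for _ in range(steps):
        d1 = omega + K * math.sin(th2 - th1)
        d2 = omega + K * math.sin(th1 - th2)
        th1 += dt * d1
        th2 += dt * d2
    return (th2 - th1) % (2 * math.pi)

phase_diff = simulate_hco()      # converges toward pi (antiphase)
```

The conductance-based model adds the slow inactivating currents on top of this basic alternation, which is what allows two coexisting rhythms and transient paw-shake episodes.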
We acknowledge support by the Brains and Behavior Fellowship for Jessica Parker at Georgia State University and by NIH P.01 HD32571, R01 EB012855, and R01 NS100928 to Boris I. Prilutsky.
Bondy B, Klishko AN, Edwards DH, Prilutsky BI, Cymbalyuk G: Control of cat walking and paw-shake by a multifunctional central pattern generator. In: Neuromechanical Modeling of Posture and Locomotion. edn. New York: Springer; 2016, 333–359.
McCrea DA, Rybak IA. Organization of mammalian locomotor rhythm and pattern generation. Brain Res Rev 2008, 57, 134–146.
P82 The role of Na+/K+ pump in intrinsic intermittent bursting dynamics in model neuron of the Pre-Bötzinger Complex
Alex Vargas, Gennady Cymbalyuk
Georgia State University, Neuroscience Institute, Atlanta, GA, United States
Correspondence: Gennady Cymbalyuk (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P82
Central pattern generators (CPGs) are oscillatory neuronal circuits controlling rhythmic movements. Movements like breathing have to be continually regulated for an animal to meet environmental demands [2–4]. The Pre-Bötzinger Complex (PBC), located in the medulla of the brainstem, produces patterns controlling the inspiratory phase of breathing. Prolonged hypoxia leads to dysrhythmia in this CPG, causing apnea, or cessation of breathing. We focus our study on the potential role of the Na+/K+ pump in intermittent intrinsic patterns which we discovered in a model of a Pre-Bötzinger Complex neuron. Our hypothesis is that these patterns are similar to intermittent patterns of tadpole swimming. The major function of the Na+/K+ pump is to maintain the ion gradients of Na+ and K+ in a 3:2 exchange ratio, consuming one ATP molecule per cycle. The pump is electrogenic and activity-dependent; it directly contributes to neuronal dynamics across the entire voltage range of operation. By maintaining the ionic gradients while contributing to neuronal dynamics, the pump presents both advantages and pathological risks. We developed a model of a PBC neuron which bursts intrinsically based on persistent sodium current dynamics. The model describes a dynamic intracellular Na+ concentration which determines the reversal potential for all sodium currents. Fast sodium, persistent sodium, delayed-rectifier potassium, slow calcium, leak, and h-currents, together with the pump current, contribute to the membrane potential. The pump current is controlled by a parameter for maximal pump strength which reflects ATP levels within the cell; a higher value corresponds to more ATP available for the pump to consume. We demonstrate that our model produces functional bursting under normoxia and that a decrease of the pump strength corresponding to hypoxia generates intermittent bursting.
Shifting the K+ reversal potential (EK) drastically affected the interbout interval of the intermittent activity: the more hyperpolarized EK, the longer the interbout interval. The more depolarized EK is, the smaller the increase in maximal pump strength necessary to restore functional activity. We investigated the dynamical mechanisms underlying the role of the Na+/K+ pump and find that these results are significant for further understanding pathological vulnerabilities in the respiratory centers of the brain.
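A common way to capture the pump's dependence on intracellular Na+ and on available ATP is a sigmoidal outward current scaled by a maximal-strength parameter. The functional form and all numbers below are generic placeholders for this class of model, not the authors' fitted values:

```python
import math

def i_pump(na_in, i_max, na_half=20.0, slope=3.0):
    """Outward Na+/K+ pump current (arbitrary units): sigmoidal in [Na+]_i
    and scaled by i_max, which stands in for ATP availability
    (hypoxia ~ reduced i_max). na_half/slope are placeholder constants."""
    return i_max / (1.0 + math.exp((na_half - na_in) / slope))

# Lowering i_max (modelling hypoxia) weakens the pump at every [Na+]_i,
# which in the full model pushes bursting toward intermittent regimes.
na_values = list(range(5, 40, 5))
normoxia = [i_pump(na, i_max=1.0) for na in na_values]
hypoxia = [i_pump(na, i_max=0.4) for na in na_values]
```

The pump current grows monotonically with intracellular Na+, so it acts as a slow negative feedback on sodium accumulation during bursting.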
Marder, E. and R.L. Calabrese, Principles of rhythmic motor pattern generation. Physiological reviews, 1996. 76(3): p. 687–717.
Tryba, A.K., et al., Differential modulation of neural network and pacemaker activity underlying eupnea and sigh-breathing activities. Journal of Neurophysiology, 2008. 99(5): p. 2114–25.
Koizumi, H. and J.C. Smith, Persistent Na+ and K+ -dominated leak currents contribute to respiratory rhythm generation in the pre-Botzinger complex in vitro. The Journal of Neuroscience : the official journal of the Society for Neuroscience, 2008. 28(7): p. 1773–85.
Bell, H.J. and N.I. Syed, Hypoxia-induced modulation of the respiratory CPG. Frontiers in Bioscience, 2009. 14: p. 3825–3835.
Zhang, H.Y. and K.T. Sillar, Short-term memory of motor network performance via activity-dependent potentiation of Na+/K+ pump function. Current Biology: CB, 2012. 22(6): p. 526–31.
P83 Changes in relaxation time predict stimulus-induced reduction of variability at the single-cell level
Luca Mazzucato1, Ahmad Jezzini2, Alfredo Fontanini3, Giancarlo La Camera3, Gianluigi Mongillo4
1Columbia University, Center for Theoretical Neuroscience, New York, NY, United States; 2Washington University, Department of Neuroscience, St. Louis, MO, United States; 3Stony Brook University, Department of Neurobiology and Behavior, Stony Brook, NY, United States; 4Université Paris Descartes, Centre de Neurophysique, Physiologie et Pathologie, Paris, France
Correspondence: Luca Mazzucato (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P83
It has been reported that stimulus presentation reduces the level of neuronal variability. The mechanism underlying this phenomenon, however, is yet to be elucidated. Here, we present evidence suggesting that changes in trial-to-trial variability are determined by changes in single-neuron relaxation times. We estimated non-parametrically the single-cell autocorrelation (AC) times during spontaneous and stimulus-evoked activity in the cortex (gustatory and pre-frontal) and the medio-dorsal thalamus of alert rats. We found broad distributions of AC times in all areas, ranging from less than 20 ms to more than 4 s (our largest observation window); their distributions were right-skewed and long-tailed. We found that single-cell AC times changed between the two conditions: neurons with slow spontaneous AC times became fast after stimulus presentation, and vice versa. We uncovered a relationship between changes in AC times (between spontaneous and evoked conditions) and stimulus-induced changes in trial-to-trial variability, at the single-neuron level. While the overall Fano factor dropped during evoked periods compared to spontaneous periods in all areas, consistent with previous reports, we found that such reduction was entirely driven by the subpopulation of neurons whose AC times were also reduced by the stimulus. Changes in AC time between the spontaneous and evoked conditions thus predict the observed changes in trial-to-trial variability at the single-cell level. These results suggest that local circuit dynamics in both cortex and thalamus evolve through sequences of metastable states, whose durations are modulated by stimulus presentation.
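As a concrete illustration of the two quantities compared in this study, the sketch below estimates an autocorrelation time (the lag at which the normalised autocorrelation first falls below 1/e) and a Fano factor from binned activity. This is a generic estimator on synthetic data, not the non-parametric procedure applied to the recordings:

```python
import numpy as np

def ac_time(x, dt=1.0):
    """Lag (in units of dt) at which the normalised autocorrelation of x
    first drops below 1/e; returns inf if it never does."""
    x = np.asarray(x, dtype=float)
    r = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    r = r / r[0]
    below = np.where(r < 1.0 / np.e)[0]
    return below[0] * dt if below.size else np.inf

def fano(counts):
    """Trial-to-trial Fano factor: variance over mean of spike counts."""
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean()

# A geometrically decaying sequence has autocorrelation ~ a**lag, so its
# 1/e time is close to -1/ln(a) samples (about 9.5 samples for a = 0.9).
a = 0.9
tau_est = ac_time(a ** np.arange(500))
```

A stimulus-induced drop in a neuron's `ac_time` would, per the abstract's result, predict a drop in its `fano` between spontaneous and evoked windows.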
Martin Zapotocky1, Stepan Kortus1, Govindan Dayanithi2
1Czech Academy of Sciences, Institute of Physiology, Prague, Czechia; 2Czech Academy of Sciences, Institute of Experimental Medicine, Prague, Czechia
Correspondence: Martin Zapotocky (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P84
Calcium signaling in neurons is typically initiated by Ca2+ influx through voltage-gated channels in the plasma membrane. In many neuronal types, however, it has been shown that the resulting increase of cytosolic [Ca2+] can be significantly modulated by release/uptake of Ca2+ by intracellular stores. A protocol commonly used to analyze such modulation consists of depolarizing the membrane by exposure to a high-K+ pulse and recording the resulting transient [Ca2+] response, either in control conditions or in the presence of drugs that activate/inhibit Ca2+ fluxes arising from specific intracellular stores. The detailed time course of these fluxes, however, is rarely analyzed. We have developed a combined experimental and computational method that makes it possible to separate the principal contributing fluxes and to extract their time courses. We applied this method to freshly isolated magnocellular neurons from the rat supraoptic nucleus, with [Ca2+] kinetics recorded using Fura-2 based ratiometric imaging. We modeled the [Ca2+] kinetics as resulting from depolarization-induced Ca2+ entry, Ca2+ clearance by pumps and exchangers at the plasma membrane, Ca2+ release from the endoplasmic reticulum (ER), and Ca2+ uptake by the ER. The clearance rate function was identified from experiments in which the ER fluxes were blocked. We show that in response to a series of depolarization steps, the [Ca2+] elevation can be either potentiated or attenuated, depending on the filling state of the ER. We identify the time course of the calcium-induced-calcium-release flux mediating the potentiation and of the ER re-uptake flux mediating the response attenuation. The principal functional role of the magnocellular neurons consists in the release of the hormones arginine-vasopressin and oxytocin in response to physiological stimuli.
We analyze the role that the usage-dependent potentiation/attenuation of the [Ca2+] response may play in the patterning of action potential bursts, which in turn control the release of vasopressin from the nerve terminals into the bloodstream.
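The decomposition logic can be sketched numerically: once the plasma-membrane clearance function has been calibrated from ER-blocked experiments, the net ER flux can be recovered as the residual of the calcium balance equation. The specific forms and rate constants below are illustrative placeholders, not the fitted model:

```python
import numpy as np

# Illustrative calcium balance: d[Ca]/dt = J_in - k_clear*[Ca] + J_ER.
# We simulate with a known ER flux, then recover it as the residual
# J_ER = d[Ca]/dt - J_in + k_clear*[Ca], mimicking how the ER flux time
# course could be extracted from imaging data once clearance is known.
dt, n = 1e-3, 5000
t = np.arange(n) * dt
k_clear = 2.0                                       # placeholder clearance rate
j_in = np.where(t < 1.0, 1.0, 0.0)                  # depolarisation-induced entry
j_er_true = 0.5 * np.exp(-((t - 2.0) / 0.3) ** 2)   # placeholder ER release

ca = np.zeros(n)
for i in range(n - 1):
    ca[i + 1] = ca[i] + dt * (j_in[i] - k_clear * ca[i] + j_er_true[i])

dca = np.gradient(ca, dt)
j_er_recovered = dca - j_in + k_clear * ca
```

Away from the sharp offset of the entry pulse, the recovered flux tracks the true one to within the discretisation error of the derivative.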
Abhishek De, Gregory D. Horwitz
University of Washington, Department of Physiology and Biophysics, Seattle, WA, United States
Correspondence: Abhishek De (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P85
The color of a light depends on surrounding lights. This effect is likely mediated, at least in part, by double opponent (DO) neurons in area V1. DO neurons have two characteristic properties: they are cone opponent and they have opposite color preferences in different parts of their spatial receptive field (RF). As a result, DO neurons respond maximally to color boundaries and weakly to full-field color stimuli. How these neurons integrate color signals across their RFs, however, is not well understood. For this reason, physiological and psychophysical spatial color processing are difficult to relate quantitatively. We identified V1 DO neurons in awake behaving monkeys using spike-triggered averaging. We presented stimuli that activated non-overlapping regions of the RF individually or simultaneously. Using an adaptive closed-loop stimulus generator, we identified stimuli that drove the same neuronal response but differed in how strongly they activated two regions of the RF. We encountered two classes of DO neurons that were selective for either blue-yellow or red-green edges. Almost all blue-yellow and some red-green DO neurons responded to a weighted sum of color signals from the two non-overlapping regions of their RFs. Consequently, these neurons responded to chromatic contrast between the two regions of their RFs irrespective of the absolute chromaticities that defined the edge. For example, a blue-yellow DO neuron responded identically to a blue-yellow edge and to an edge between a saturated and an unsaturated blue (or yellow). A subset of red-green DO neurons combined color signals across their RFs nonlinearly. This nonlinearity may be due to complex interactions between cone opponent and cone non-opponent signals across space that have previously been identified with spike-triggered covariance analysis.
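The weighted-sum behaviour can be phrased as an iso-response condition: if the response depends only on the weighted sum of the two regions' inputs, then stimulus pairs producing the same response fall on a straight line in the stimulus plane. The toy cell and weights below are hypothetical, purely to make that condition concrete:

```python
def response(s1, s2, w1=1.0, w2=0.5):
    """Toy linearly-summing DO cell: the output depends only on the
    weighted sum of the contrasts delivered to the two RF regions,
    passed through a half-wave-rectifying output nonlinearity."""
    drive = w1 * s1 + w2 * s2
    return max(drive, 0.0)

def iso_response_s2(target, s1, w1=1.0, w2=0.5):
    """Region-2 contrast that holds the response at `target` given
    region-1 contrast s1 (valid while the summed drive is positive)."""
    return (target - w1 * s1) / w2

# Three stimulus pairs on one iso-response line, as a closed-loop
# stimulus generator would find for a linearly summing neuron.
pairs = [(s1, iso_response_s2(0.8, s1)) for s1 in (0.0, 0.2, 0.4)]
```

For the nonlinear red-green subset reported above, the measured iso-response contours would curve away from this straight line.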
Jiyoung Kang, Kyesam Jung, Hae-Jeong Park
Yonsei University, College of Medicine, Seoul, Korea, Republic of
Correspondence: Hae-Jeong Park (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P86
Voltage-sensitive dye imaging (VSDI), an important neurophysiological technique for investigating the dynamics of the brain, can extract changes in the membrane potentials of large neural populations with high spatial (20–50 μm) and temporal (1–2 ms) resolution. The VSDI signal reflects changes in neural population activity. However, the VSDI signal itself does not reflect connectivity across neural populations. Thus, a computational analysis is essential to estimate effective connectivity among neural populations reflected in the VSDI data. In the present study, adopting dynamic causal modeling (DCM), we developed a novel framework for effective connectivity analysis of VSDI data: VSDI-DCM. VSDI-DCM consists of two parts, a hidden neural state model and a VSDI observation model, which describe the dynamics of neural population activity and the transformation from hidden neural states to VSDI signals, respectively. All model parameters in both the neural and observation models are estimated simultaneously by Bayesian inference to minimize prediction errors with respect to the observed VSDI data. We analyzed publicly available VSDI data from mouse hippocampal slices. In this experiment, the temporoammonic pathway was stimulated four times at 100 ms intervals. For the first stimulus, hyperpolarization after the stimulation was observed in the CA1 region, but this inhibition was reduced for the later stimuli. We extracted VSDI signals at the hilus, CA1, and CA3 regions of the hippocampus. For the neural state model among these three regions, we employed a Jansen and Rit model, with three sub-populations (two excitatory and one inhibitory neural populations) for each region and three types of directional interactions between pairs of regions. We further added a memory term to describe adaptive properties of the neural spikes. We used linear combinations of the three sub-populations for the observation model of the VSDI signals.
As a result, VSDI-DCM successfully fit the VSDI signals in both the wild-type mouse and the epileptic Arx conditional knock-out mutant mouse. In the mutant mouse, hyperpolarization did not decrease over the consecutive stimuli. We found that the adaptive parameters of VSDI-DCM play an essential role in differentiating responses of the mutant from those of the wild type. We believe that VSDI-DCM could be used for the investigation of mesoscale brain dynamics.
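The neural state model named above builds on the Jansen and Rit population equations, whose core ingredient is a second-order synaptic kernel h(t) = A·a·t·e^(−at). The minimal check below integrates that kernel's ODE with the standard excitatory constants from Jansen and Rit (1995); it is a sketch of one building block, not the full VSDI-DCM:

```python
import math

# Second-order synaptic dynamics used in Jansen-Rit style models:
#   x'' = A*a*u(t) - 2*a*x' - a*a*x
# For an impulse input the response is h(t) = A*a*t*exp(-a*t), which
# peaks at t = 1/a with value A/e. We verify this by Euler integration.
A, a = 3.25, 100.0            # standard excitatory gain (mV) and rate (1/s)
dt, steps = 1e-5, 5000
x, dx = 0.0, A * a            # an impulse at t=0 sets the initial slope to A*a
peak_t, peak_x = 0.0, 0.0
for i in range(steps):
    t = i * dt
    if x > peak_x:
        peak_x, peak_t = x, t
    ddx = -2 * a * dx - a * a * x
    x += dt * dx
    dx += dt * ddx
```

Chaining three such kernels through a sigmoid firing-rate function yields the per-region dynamics; VSDI-DCM then adds the memory (adaptation) term and the linear observation model on top.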
This research was supported by Brain Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2017M3C7A1049051).
Friston KJ, et al. Dynamic causal modelling. Neuroimage 2003, 19(4): 1273–1302.
Bourgeois EB, et al. A toolbox for spatiotemporal analysis of voltage-sensitive dye imaging data in brain slices. Plos One 2014, 9(9): e108686.
Jansen BH, Rit VG. Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biol Cybern 1995, 73(4): 357–366.
P87 Estimation of effective connectivity in the microcircuits of the mouse barrel cortex using dynamic causal modeling of calcium imaging
Kyesam Jung, Jiyoung Kang, Hae-Jeong Park
Yonsei University, College of Medicine, Seoul, Korea, Republic of
Correspondence: Hae-Jeong Park (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P87
Computational modeling of the cerebral cortex may help unravel, in the form of effective connectivity, the mechanism by which given stimuli are encoded. The purpose of this study is to estimate effective connectivity among neuronal populations of the mouse barrel cortex using calcium imaging functional data.
This research was supported by Brain Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2017M3C7A1049051).
Peron, S. et al. Calcium imaging data from vibrissal S1 neurons in adult mice performing a pole localization task. CRCNS.org 2014.
Friston KJ, et al. Dynamic causal modelling. Neuroimage 2003, 19(4): 1273–1302.
Jansen BH, Rit VG. Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biol Cybern 1995, 73(4): 357–366.
Alexander Bird1, Lisa Deters1, Hermann Cuntz2
1Frankfurt Institute for Advanced Studies (FIAS), Computational Neuroanatomy, Frankfurt am Main, Germany; 2Frankfurt Institute for Advanced Studies (FIAS) & Ernst Strüngmann Institute (ESI), Computational Neuroanatomy, Frankfurt/Main, Germany
Correspondence: Alexander Bird (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P88
The functionality of the brain depends fundamentally on the connectivity of its neurons for everything from the propagation of afferent signals to computation and memory retention. Connectivity arises from the apposition of complex branched axonal and dendritic arbors, which each display a diverse array of forms, both within and between neuronal classes. Despite this complexity, neurons of different classes have been observed to form synapses in highly specific ways, leading to potentially highly structured connectivity motifs within neuronal networks. Whilst the large-scale EM studies necessary to definitively constrain synaptic connectivity remain prohibitively slow, and viral synaptic tracing is limited to small neuron numbers, putative synaptic locations inferred from the close juxtaposition of dendrite and axon are more readily measured and provide the potential set of all possible synaptic contacts: the backbone upon which neuronal activity can fine-tune connectivity. It has been shown that much of the specificity in putative connectivity can be explained by a detailed analysis of the statistical overlap of different axonal and dendritic arbors, in a manner analogous to Peters' rule, where synapses are assumed to form uniformly where possible. However, such analyses rely on full neuronal reconstructions with large numbers of parameters and are difficult to apply intuitively to microcircuits. We have investigated the number of putative synapses that form between artificial arbors generated using a generalised minimum-spanning-tree algorithm that mirrors the structure of real neurons. We have found that the number of putative synaptic contacts depends linearly on just four properties of the arbors: the volume of the region where dendrite and axon overlap, the lengths of the axonal and dendritic arbors within this region, and the maximum dendritic spine length at which synaptic contacts can form.
The relationship between these four parameters and the estimated synapse number can be expressed as a single equation (adapted from earlier analytical results) and accurately models the number of putative synapses between reconstructed cortical neurons. We have additionally shown that this relationship is specific to typical dendritic and axonal structures, as morphologies that resemble knock-out mutants with pathologically clustered dendrites do not fit our predictions. Other deviations from the predictions of our study could provide insights into the degree of targeting in neurite growth processes in different brain regions as more detailed connectome data become available. Overall, our work provides an intuitive way to estimate the putative synaptic connectivity of microcircuits, greatly simplifying the parameters necessary for analytical and numerical studies of biophysically detailed neuronal networks.
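A closed form consistent with the four quantities listed (overlap volume V, axonal length L_a and dendritic length L_d within the overlap, and maximal spine reach s) is the classic statistical-overlap estimate N ≈ 2·s·L_a·L_d / V. Whether this matches the authors' exact equation is an assumption here; the sketch only shows how the stated linear dependencies combine:

```python
def expected_contacts(l_axon, l_dend, spine_reach, overlap_volume):
    """Expected number of putative synaptic contacts between an axon of
    length l_axon and a dendrite of length l_dend sharing a region of
    volume overlap_volume, when a contact can form wherever the axon
    passes within spine_reach of the dendrite (classic statistical-
    overlap estimate; units must be consistent, e.g. um and um^3)."""
    return 2.0 * spine_reach * l_axon * l_dend / overlap_volume

# Linearity: doubling either arbor length doubles the expected count,
# while doubling the shared volume halves it. Placeholder values:
n = expected_contacts(l_axon=4000.0, l_dend=3000.0,
                      spine_reach=2.0, overlap_volume=1e7)
```

This is the kind of single-equation summary that replaces full-reconstruction overlap analyses for back-of-the-envelope microcircuit estimates.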
Jiang X, Shen S, Cadwell CR, et al. Principles of connectivity among morphologically defined cell types in adult neocortex. Science 2015, 350(6264):aac9462. https://doi.org/10.1126/science.aac9462.
Helmstaedter, M. Cellular-resolution connectomics: challenges of dense neural circuit reconstruction. Nature methods 2013, 10(6), 501.
Wall NR, De La Parra M, Callaway EM, Kreitzer AC. Differential innervation of direct- and indirect-pathway striatal projection neurons. Neuron 2013, 79(2):347–360. https://doi.org/10.1016/j.neuron.2013.05.014.
Markram H, Lübke J, Frotscher M, Roth A, Sakmann B. Physiology and anatomy of synaptic connections between thick tufted pyramidal neurones in the developing rat neocortex. The Journal of Physiology 1997, 500(Pt 2):409–440.
Hill SL, Wang Y, Riachi I, Schürmann F, Markram H. Statistical connectivity provides a sufficient foundation for specific functional connectivity in neocortical neural microcircuits. Proceedings of the National Academy of Sciences of the United States of America 2012, 109(42):E2885-E2894. https://doi.org/10.1073/pnas.1202128109.
Braitenberg V, Schüz A. Cortex: Statistics and Geometry of Neuronal Connectivity. Springer Science & Business Media, 2013.
Cuntz H, Forstner F, Borst A, Häusser M. One Rule to Grow Them All: A General Theory of Neuronal Branching and Its Practical Application. Morrison A, ed. PLoS Computational Biology 2010, 6(8):e1000877. https://doi.org/10.1371/journal.pcbi.1000877.
Liley DT, Wright JJ. Intracortical connectivity of pyramidal and stellate cells: estimates of synaptic densities and coupling symmetry. Network: Computation in Neural Systems 1994, 5(2), 175–189.
Chklovskii DB. Synaptic connectivity and neuronal morphology: two sides of the same coin. Neuron 2004, 43(5), 609–617.
Ascoli GA, Donohue DE, Halavi M. NeuroMorpho.Org: a central resource for neuronal morphologies. Journal of Neuroscience 2007, 27(35), 9247–9251.
Marvin Weigand1, Hermann Cuntz2
1Frankfurt Institute for Advanced Studies (FIAS), Frankfurt, Germany; 2Frankfurt Institute for Advanced Studies (FIAS) & Ernst Strüngmann Institute (ESI), Computational Neuroanatomy, Frankfurt/Main, Germany
Correspondence: Marvin Weigand (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P89
The concept of a hypercolumn is used to subdivide discriminable patterns of continuously shifting feature preferences into discrete topographical units [1–3]. In the visual cortex, such hypercolumns consist, for example, of repeating pinwheel patterns [4–6] and seem to follow a universal design principle across mammalian species, because the number of pinwheels per hypercolumn area is constant, near π. We find, using curated biological data, that this constant relationship is a general consequence of a fixed number of neurons per hypercolumn and that differences in absolute pinwheel densities are a mere consequence of differences in neuronal density. Low neuronal densities would therefore result in large hypercolumns and vice versa. In agreement with previous results, our analysis of the characteristic orientation preference hypercolumns in the primary visual cortex yields a constant number of ~30,000 neurons per pinwheel and defines a minimum of ~300 pinwheels below which organisms lack hypercolumns altogether. Using a computational model based on optimal wiring principles, we confirm our empirical results by showing that similarly structured hypercolumns appear with fixed cell numbers independently of the overall network size. Furthermore, we show that a fixed hypercolumn size is compatible with the absence of hypercolumns in rodent species. Overall, our results provide further evidence for a universal design principle in the visual cortex across mammalian species.
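The bookkeeping behind the constant can be made explicit: with a fixed count of neurons per pinwheel, the pinwheel area and pinwheel density follow directly from the areal neuronal density. The density values below are placeholders for illustration, not measurements from the study:

```python
NEURONS_PER_PINWHEEL = 30_000  # constant count reported in the abstract

def pinwheel_layout(neurons_per_mm2):
    """Given an areal neuronal density (neurons per mm^2 of V1 surface),
    return (area per pinwheel in mm^2, pinwheels per mm^2) under the
    fixed neurons-per-pinwheel rule."""
    area = NEURONS_PER_PINWHEEL / neurons_per_mm2
    return area, 1.0 / area

# Lower neuronal density -> larger hypercolumns and fewer pinwheels
# per unit area (placeholder densities):
area_lo, dens_lo = pinwheel_layout(50_000.0)
area_hi, dens_hi = pinwheel_layout(200_000.0)
```

On this account, cross-species differences in absolute pinwheel density reduce entirely to differences in neuronal density, as the abstract argues.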
Mountcastle VB. The columnar organization of the neocortex. Brain 1997, 120, 701–722.
Kaas JH. Evolution of columns, modules, and domains in the neocortex of primates. Proc. Natl. Acad. Sci. 2012, 109, 10655–10660.
Horton JC, Adams DL. The cortical column: a structure without a function. Philos. Trans. R. Soc. B Biol. Sci. 2005, 360, 837–862.
Hubel DH, Wiesel TN. Sequence regularity and geometry of orientation columns in the monkey striate cortex. J. Comp. Neurol. 1974, 158, 267–293.
Blasdel GG, Salama G. Voltage-sensitive dyes reveal a modular organization in monkey striate cortex. Nature 1986, 321, 579–585.
Ohki K, et al. Highly ordered arrangement of single neurons in orientation pinwheels. Nature 2006, 442, 925–928.
Weigand M, Sartori F, Cuntz H. Universal transition from unstructured to structured neural maps. Proc. Natl. Acad. Sci. 2017, 114, E4057–E4064.
Kaschube M, et al. Universality in the evolution of orientation columns in the visual cortex. Science 2010, 330, 1113–1116.
Srinivasan S, Carlo CN, Stevens CF. Predicting visual acuity from the structure of visual cortex. Proc. Natl. Acad. Sci. 2015, 112, 7815–7820.
Felix Effenberger, Hermann Cuntz
Frankfurt Institute for Advanced Studies (FIAS) & Ernst Strüngmann Institute (ESI), Computational Neuroanatomy, Frankfurt/Main, Germany
Correspondence: Hermann Cuntz (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P90
In neural circuits, neurons send out tree-shaped dendritic structures to collect inputs from their presynaptic partners. Different cell types are visually identifiable by the characteristic shapes of their dendrites [4, 5], and these also critically affect their respective computations. A large number of branching statistics have been proposed as objective criteria to capture differences between cell types and to distinguish disease or mutation phenotypes. Yet, as we show here, most of those widely used statistics show trivial correlations that are essentially entirely explained by optimal wiring considerations, consistent with their poor power for sorting dendritic tree shapes into their respective cell types. Using a simple maximum entropy model based on minimum spanning trees, we were able to reproduce almost all relationships between the commonly used branching statistics. To verify our model we studied a large set of real dendritic trees, covering a multitude of different cell types, species, developmental stages and brain regions. Our study not only gives a comprehensive overview of all commonly used statistics and emphasizes the need for more powerful branching statistics, but more generally indicates a potential randomness of dendritic arborizations in the brain, constrained only by optimal wiring considerations and the space they innervate. The model we propose can furthermore serve as a basis to test the power of yet-to-be-invented branching statistics and is also likely useful for studying other branching structures found in nature, such as river networks, botanical trees, and blood vessel structures.
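A minimal version of the spanning-tree null model can be built by wiring random target points with a Euclidean minimum spanning tree and reading branching statistics off the resulting graph. This uses a plain MST rather than the balancing-factor algorithm of Cuntz et al., so it is only a sketch of the idea:

```python
import numpy as np

rng = np.random.default_rng(0)

def mst_edges(points):
    """Prim's algorithm: edge list of the Euclidean minimum spanning
    tree over a set of target points (a crude stand-in for a dendrite
    innervating those points with minimal total wiring)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    in_tree = [0]
    edges = []
    while len(in_tree) < n:
        out = [i for i in range(n) if i not in in_tree]
        sub = d[np.ix_(in_tree, out)]
        i, j = np.unravel_index(np.argmin(sub), sub.shape)
        edges.append((in_tree[i], out[j]))
        in_tree.append(out[j])
    return edges

pts = rng.random((40, 2))          # 40 random targets in a unit square
edges = mst_edges(pts)

# Example branching statistics read off the tree: node degrees give the
# numbers of branch points (degree >= 3) and tips (degree == 1).
degree = np.zeros(40, dtype=int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
n_branch_points = int(np.sum(degree >= 3))
n_tips = int(np.sum(degree == 1))
```

Distributions of such statistics over many random target sets form the null against which the discriminative power of any proposed branching statistic could be judged.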
Scorcioni R, Polavaram S, Ascoli GA. L-Measure: a web-accessible tool for the analysis, comparison and search of digital reconstructions of neuronal morphologies. Nature Protocols 2008, 3(5), 866.
Polavaram S, Gillette TA, Parekh R, Ascoli GA. Statistical analysis and data mining of digital reconstructions of dendritic morphologies. Frontiers in Neuroanatomy 2014, 8, 138.
Ascoli GA, Donohue DE, Halavi M. NeuroMorpho.Org: a central resource for neuronal morphologies. Journal of Neuroscience 2007, 27(35), 9247–9251.
Ascoli GA, Alonso-Nanclares L, Anderson SA, et al. Petilla terminology: nomenclature of features of GABAergic interneurons of the cerebral cortex. Nature Reviews Neuroscience 2008, 9(7), 557.
Cuntz H, Forstner F, Haag J, Borst A. The morphological identity of insect dendrites. PLoS Computational Biology 2008, 4(12), e1000251.
Cuntz H, Forstner F, Borst A, Häusser M. One rule to grow them all: a general theory of neuronal branching and its practical application. PLoS Computational Biology 2010, 6(8), e1000877.
P91 Dissecting the structure and function relationship in Drosophila dendrite development with the help of computational modelling
André Castro1,2, Lothar Baltruschat3, Tomke Stuerner3, Gaia Tavosanis3, Hermann Cuntz4
1Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt am Main, Portugal; 2Frankfurt Institute for Advanced Studies (FIAS); 3German Center for Neurodegenerative Diseases (DZNE), Dendrite Differentiation Unit, Bonn, Germany; 4Frankfurt Institute for Advanced Studies (FIAS) & Ernst Strüngmann Institute (ESI), Computational Neuroanatomy, Frankfurt/Main, Germany
Correspondence: André Castro (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P91
Dendritic growth is the process that ultimately leads to cell-type-specific neuronal morphologies and contributes to building mature neural circuits, shaping their computational properties. Dendritic tree morphology is strongly constrained by optimal wiring considerations and by functional properties relevant to behaviour. However, the rules controlling the fine regulation of branch outgrowth, pruning and stabilisation that leads to the mature arbour remain largely unknown. In this work we study the growth phases of ventral class I dendritic arborisation (da) neurons of the Drosophila melanogaster larva peripheral nervous system at a high temporal resolution that allows resolving the fine elements that compose the growth process. The class I da neurons, which are proprioceptive and respond to contractions in the larva body during crawling, do not obviously gain from satisfying optimal wiring constraints. Therefore, we use this system to study how their specific functional requirements may be combined with optimal wiring constraints during the developmental growth process that leads to the dendritic morphologies of these cells. Genetic manipulation of the sensory neurons' shape interferes with their sensory function and disrupts crawling behaviour, suggesting that the feedback of information about body movement depends on precise dendritic morphology. Hence, we probed the contribution of the ventral class I cells' characteristic comb-like dendrite geometry to sampling the mechanosensory inputs arising from contraction of the body wall during crawling, using high-resolution calcium imaging in freely crawling larvae. Using these recordings, we show that calcium signal changes in the comb-like dendritic branches correlate strongly with their deformation caused by body-wall contraction during the periodic strides of forward and reverse crawling.
We then utilized genetically encoded green fluorescent protein markers for ventral Class I da cells, and recorded high temporal resolution, non-invasive, in vivo time-lapse microscopy images of dendrite arbour morphogenesis in the embryo and its maturation in the larva. The time-lapse data enabled us to constrain computational growth models that clearly defined the different developmental stages of dendritic pattern formation. Furthermore, they revealed how this particular neuron type controls branching to achieve its mature shape while respecting minimal wiring constraints. Our findings unveil how single neurons can develop specialised dendrite patterns that support a well-defined function while minimizing the wiring costs associated with their dendritic trees, shedding light on general principles of structure–function emergence in single neurons.
P92 Dimensionality reduction of brain signals of rats by Spectral Principal Component Analysis (SPCA)
Altyn Zhelambayeva1, Hernando Ombao2
1Nazarbayev University, Department of Computer Science & Biological Sciences, Astana, Kazakhstan; 2King Abdullah University of Science and Technology, Statistics Program, Thuwal, Saudi Arabia
Correspondence: Altyn Zhelambayeva (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P92
P94 Advancing computational studies of the nervous system: Publishing models not paper descriptions of models
James Bower1, David Beeman2, Hugo Cornelis3
1Southern Oregon University, Department of Biology, Ashland, OR, United States; 2University of Colorado, Department of Electrical, Computer and Energy Engineering, Boulder, CO, United States; 3Neurospaces Development GCV, Martelarenlaan, Belgium
Correspondence: James Bower (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P94
Abed Ghanbari1, Naixin Ren2, Christian Keine3, Carl Stoelzel2, Bernhard Englitz4, Harvey Swadlow2, Ian H. Stevenson2
1University of Connecticut, Department of Biomedical Engineering, Storrs, CT, United States; 2University of Connecticut, Department of Psychological Sciences, Storrs, CT, United States; 3Carver College of Medicine & University of Iowa, Department of Anatomy and Cell Biology, IA, United States; 4Radboud University & Donders Institute for Brain, Cognition and Behaviour & Department of Neurophysiology, Netherlands
Correspondence: Abed Ghanbari (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P95
Short-term synaptic plasticity (STP) causes the effect of presynaptic spikes on a postsynaptic neuron to vary on timescales ranging from a few milliseconds to a few seconds. STP has been extensively studied in vitro by stimulating a presynaptic input with pulses of different frequencies and observing depression or facilitation in the postsynaptic potentials or currents. These studies have shown that the type and timescale of STP varies by cell type and brain region. However, since recording postsynaptic potentials (PSPs) or currents (PSCs) in vivo is challenging, STP has not been fully characterized in awake, behaving animals. Here, rather than observing PSPs/PSCs directly, we model how presynaptic spikes alter postsynaptic spiking and infer STP parameters from spike observations alone. In particular, we model the short-term changes in the probability of a postsynaptic spike following a presynaptic spike—the synaptic efficacy. Previous work has argued that, in depressing synapses, this probability or efficacy is larger when presynaptic spikes are preceded by long interspike intervals (ISIs), and in facilitating synapses efficacy is larger for short intervals. However, in practice, the observed correlation between pre- and postsynaptic spiking is a mixture of multiple underlying phenomena. Here we develop a model-based approach for decomposing these short-term changes into four components: (1) short-term synaptic plasticity, (2) integration of PSPs, (3) history effects, and (4) slow common inputs. The observed spike probability depends on each of these factors as well as the synaptic strength itself and the distribution of presynaptic spike times. We develop an extension of a typical generalized linear model (GLM) that uses only pre- and postsynaptic spike observations. This method allows us to characterize short-term synaptic dynamics of a wide range of synaptic behaviors in vivo.
The estimated synaptic parameters as well as plasticity parameters can be compared with in vitro measurements. To validate our model, we examined its performance for four putative synapses using only pre- and postsynaptic spike observations. We find that lateral geniculate nucleus-to-visual cortex (LGN-V1) data is consistent with short-term synaptic depression, where postsynaptic spike probability increases at long presynaptic ISIs (which allow for recovery from the depression). Data from auditory nerve-to-spherical bushy cell (ANF-SBC) synapses, on the other hand, is consistent with short-term synaptic facilitation, and spike history causes decreased postsynaptic spiking at short presynaptic ISIs. There is a wide range of efficacy patterns in the multi-electrode hippocampus (HC) data, but, in many cases, common input from theta oscillations has an impact on the observed efficacy. Lastly, a pair within the thalamus shows a depressing pattern similar to the LGN-V1 connection, with stronger integration. These results demonstrate how short-term synaptic efficacy reflects a combination of many factors, and interactions between these factors give rise to a wide diversity of effects of presynaptic spikes on postsynaptic spiking. As the number of simultaneously recorded neurons increases, this approach is likely to be useful for characterizing STP in multi-electrode array recordings as well as studying how differences in STP affect postsynaptic spiking.
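The general idea can be illustrated with a minimal sketch (not the authors' actual GLM): Tsodyks-Markram-style resource/utilization dynamics assign each presynaptic spike an efficacy, which then scales a PSP-like coupling term inside a logistic spike-probability model. All parameter values below are illustrative assumptions, not fitted quantities.

```python
import numpy as np

def tm_efficacy(spike_times, U=0.4, tau_d=0.2, tau_f=0.5):
    """Tsodyks-Markram-style per-spike efficacy (depression + facilitation).
    R = available resources, u = utilization; efficacy at a spike is u * R."""
    R, u, t_prev, eff = 1.0, 0.0, None, []
    for t in spike_times:
        if t_prev is not None:
            dt = t - t_prev
            R = 1.0 + (R - 1.0) * np.exp(-dt / tau_d)  # resources recover toward 1
            u = U + (u - U) * np.exp(-dt / tau_f)      # utilization decays toward U
        u = u + U * (1.0 - u)      # facilitation jump at the spike
        eff.append(u * R)          # fraction of resources released
        R = R * (1.0 - u)          # resources consumed by release
        t_prev = t
    return np.array(eff)

def postsyn_rate(t_grid, pre_spikes, eff, w=2.0, b=-3.0, tau_psp=0.01):
    """Logistic spike probability: baseline b plus an efficacy-scaled,
    exponentially decaying PSP-like coupling (history/common-input terms
    of the full decomposition are omitted in this sketch)."""
    drive = np.zeros_like(t_grid)
    for t, e in zip(pre_spikes, eff):
        m = t_grid >= t
        drive[m] += e * np.exp(-(t_grid[m] - t) / tau_psp)
    return 1.0 / (1.0 + np.exp(-(b + w * drive)))
```

With these (depression-dominated) parameters, efficacy declines over a regular 50 Hz train, reproducing the qualitative signature the abstract describes for depressing synapses.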
Michael Beyeler1, Ariel Rokem1, Devyani Nanduri2, James D. Weiland3, Geoffrey M. Boynton4, Ione Fine4
1University of Washington, eScience Institute, Seattle, WA, United States; 2University of Southern California, Biomedical Engineering, Los Angeles, CA, United States; 3University of Michigan, Biomedical Engineering, Ann Arbor, MI, United States; 4University of Washington, Psychology, Seattle, WA, United States
Correspondence: Michael Beyeler (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P96
By 2020 roughly 200 million people worldwide will suffer from degenerative retinal diseases. While a variety of sight restoration technologies are being developed, retinal neuroprostheses (‘bionic eyes’) are the only devices with FDA approval. These devices aim to restore functional vision by electrically stimulating remaining cells in the retina, analogous to cochlear implants. However, these devices stimulate retinal axon fibers as well as cell bodies: this leads to elongated and poorly localized percepts that severely limit the quality of the generated visual experience [1]. We previously developed a computational model that describes these distortions and accurately predicts a patient’s perceptual experience for any pattern of electrical stimulation [3–5]. However, improving the design of neuroprosthetic devices will require a solution of the inverse problem: What is the optimal stimulation protocol that elicits a desired visual percept? To answer this, we used our model to generate synthetic data that predicted elicited percepts in an Argus II epiretinal prosthesis patient. These synthetic percepts were used as features in a regularized regression optimized to find the stimulation protocols that would minimize perceptual distortions of Snellen letters. Compared to conventional protocols currently used in patients, in which each electrode is stimulated with an amplitude that is linearly related to the luminance of the corresponding location in the visual field, the percepts produced with the optimized stimulation protocols confer a substantial potential advantage, both in terms of expected visual acuity and overall delivered charge: stimulation protocols proposed by the algorithm only sparsely activated the electrode array and compensated for the perceptual distortions thought to be caused by axonal stimulation. Future work will include more sophisticated machine learning methods that can compensate for spatiotemporal distortions across a wider range of implants.
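The inverse problem can be sketched as a sparse (L1-regularized) least-squares fit. Here the matrix `A` stands in for the forward model's predicted percept per electrode, ISTA is one generic solver (the abstract does not specify the optimizer), and the sparsity of the solution mirrors the sparse electrode activation the abstract reports; a nonnegativity constraint on amplitudes could be added by clipping after each soft-threshold step.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_stim(A, target, lam=0.1, n_iter=500):
    """ISTA for min_x 0.5*||A x - target||^2 + lam*||x||_1.
    A: (pixels, electrodes) predicted percept per unit-amplitude electrode;
    target: desired percept (e.g., a Snellen letter, flattened);
    returns x: per-electrode stimulation amplitudes, typically sparse."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - (A.T @ (A @ x - target)) / L, lam / L)
    return x
```

With `A` the identity the solver reduces to soft-thresholding the target, which makes the sparsifying effect of the penalty easy to verify.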
Emily Stone1, Elham Bayat-Mokhtari1, J. Josh Lawrence2
1University of Montana, Department of Mathematical Sciences, Missoula, MT, United States; 2Texas Tech University Health Sciences Center, Department of Pharmacology and Neuroscience, Lubbock, TX, United States
Correspondence: Emily Stone (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P97
Simple models of short term synaptic plasticity that incorporate facilitation and/or depression have been created in abundance for different synapse types and circumstances. The analysis of these models has included computing the mutual information between a stochastic input spike train to the synapse and some representation of the postsynaptic response. While this approach has proven useful in many contexts, for the purpose of determining the type of process underlying a stochastic output train, it ignores the ordering of the responses, leaving an important characterizing feature on the table. In this work we use a broader class of information measures on the output only, and specifically construct hidden Markov models (known as epsilon machines or causal state models) to differentiate between synapse types and classify the complexity of the process. We find that the machines allow us to differentiate between processes that otherwise have similar output distributions. We are also able to understand these differences in terms of the dynamics of the model used to create the output response, bringing the analysis full circle. Hence this technique provides a complementary description of the synaptic filtering process, and potentially expands the interpretation of future experimental results.
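The core idea behind causal-state models — that histories are equivalent when they predict the same future — can be caricatured in a few lines. This toy sketch (not the authors' estimator) merges length-k histories of a symbolized response sequence whose empirical next-symbol distributions agree within a total-variation tolerance; the number of resulting states is a crude complexity measure that is sensitive to response ordering, unlike the output distribution alone.

```python
from collections import defaultdict

def causal_states(seq, k=2, tol=0.1):
    """Toy causal-state reconstruction: group length-k histories whose
    empirical next-symbol distributions agree within total variation `tol`."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(seq) - k):
        counts[tuple(seq[i:i + k])][seq[i + k]] += 1
    dists = {h: {s: n / sum(c.values()) for s, n in c.items()}
             for h, c in counts.items()}
    states = []  # each state: a list of histories sharing one predictive distribution
    for h, d in dists.items():
        for st in states:
            d0 = dists[st[0]]
            tv = 0.5 * sum(abs(d.get(s, 0.0) - d0.get(s, 0.0))
                           for s in set(d) | set(d0))
            if tv < tol:
                st.append(h)
                break
        else:
            states.append([h])
    return states
```

A period-2 response train yields two causal states while a constant train yields one, even though suitably chosen versions of the two can have identical symbol frequencies.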
Mika Jain, Jack Lindsey
Stanford University, Departments of Physics, Computer Sciences & Biology, NYC, NY, United States
Correspondence: Mika Jain (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P98
We introduce a computational model capturing the high-level features of the complementary learning systems (CLS) framework. In particular, we model the integration of episodic memory with statistical learning in an end-to-end trainable neural network architecture. We model episodic memory with a nonparametric module which can retrieve past observations in response to a given observation, and statistical learning with a parametric module which performs inference on the given observation. We demonstrate on vision and control tasks that our model is able to leverage the respective advantages of nonparametric and parametric learning strategies, and that its behavior aligns with a variety of behavioral and neural data. In particular, our model’s behavior is consistent with results indicating that episodic memory systems in the hippocampus aid early learning and transfer generalization. We also find qualitative results consistent with findings that neural traces of memories of similar events converge over time. Furthermore, without explicit instruction or incentive, the behavior of our model naturally aligns with results suggesting that the usage of episodic systems wanes over the course of learning. These results suggest that key features of the CLS framework emerge in a task-optimized model containing statistical and episodic learning components, supporting several hypotheses of the framework.
Matt Valley1, Michael Moore2, Jun Zhuang1, Natalia Mesa1, Mark Reimers2, Jack Waters1
1Allen Institute for Brain Science, Modelling, Analysis and Theory, Seattle, WA, United States; 2Michigan State University, Department of Neuroscience, East Lansing, MI, United States
Correspondence: Matt Valley (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P99
Thomas Nowotny1, Eleni Vasilaki2, Andrew O. Philippides1, Paul R. Graham3, Lars Chittka4, Mikko Juusola5, James A. R. Marshall2
1University of Sussex, School of Engineering and Informatics, Brighton, United Kingdom; 2University of Sheffield, Department of Computer Sciences, Sheffield, United Kingdom; 3University of Sussex, School of Life Sciences, Brighton, United Kingdom; 4Queen Mary, University of London, School of Biological & Chemical Sciences, London, United Kingdom; 5University of Sheffield, Department of Biomedical Science, Sheffield, United Kingdom
Correspondence: Thomas Nowotny (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P100
What if we could design an autonomous flying robot with the navigational and learning abilities of a honeybee?
In the ‘Brains on Board’ project we have brought together experts in computational neuroscience, bio-inspired robotics, animal behaviour and neurophysiology from three UK universities to realize this vision. Autonomous control of mobile robots requires robustness to environmental and sensory uncertainty, and the flexibility to deal with novel environments and scenarios. Animals solve these problems by having flexible brains capable of unsupervised pattern detection and learning. Even ‘small’-brained animals like bees exhibit sophisticated learning and navigation abilities using very efficient brains of only up to 1 million neurons, 100,000 times fewer than in a human brain. Crucially, these mini-brains nevertheless support high levels of multitasking and they are adaptable, within the lifetime of an individual, to completely novel scenarios; this is in marked contrast to typical control engineering solutions. In the Brains on Board project we fuse computational and experimental neuroscience to develop a ground-breaking new class of highly efficient robot controllers, able to exhibit adaptive behaviour while running on powerful yet lightweight accelerated embedded systems hardware such as NVIDIA’s Jetson TX2 and Movidius’ Myriad II systems. On this poster we present an overview of the Brains on Board project and discuss preliminary results:
1. We have developed the SpineCreator-SpineML-GeNN toolchain to make best use of embedded GPU accelerators for autonomous robots and obtain sufficient compute power to run bee brain simulations in real time on a flying robot.
2. We have created a bee virtual reality system for closed-loop behavioural experiments with walking bees.
3. We have obtained large quantities of 2D bee flightpath data through radar tracking, and a 3D harmonic radar tracking system is close to completion.
4. We have developed novel computational neuroscience models for reward estimation in bees and fruit flies, models of the visual system and oculo-motor reflex, and a model of the central complex related to navigation.
The Brains on Board project is financed by the Engineering and Physical Sciences Research Council (EPSRC), grant EP/P006094/1.
Sebastien Naze, James Humble, James Kozloski
IBM TJ Watson Research Center, Multiscale Brain Modeling and Neural Tissue Simulation, Yorktown Heights, NY, United States
Correspondence: Sebastien Naze (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P101
Abnormal gamma band power across cortex and striatum is observed in Huntington’s disease (HD) in both patients and animal models. The origin of this phenomenon is not well understood, nor is its functional relevance to disease pathology. To address the former, we developed three hypotheses and a computational model for fast-spiking interneurons (FSIs) that was based on observations from mouse striatal anatomy and physiology. First, we considered if abnormal cortical activity alone can account for an increased gamma power recorded in the striatum, with the common assumption that FSIs are responsible for such high frequency oscillations. Second, we asked if a reorganization of corticostriatal projections in terms of driving strength can account for increased gamma in the striatum. Third, we considered if changes within the striatal micro-circuit can explain the increase in gamma power therein. Changes of peak gamma frequency and power ratio were readily reproduced by our computational model, accounting for several experimental findings reported in the literature. Our results also suggest that cortical changes alone are unlikely to account for the full range of phenomena observed in striatum, and that instead both a reorganization of corticostriatal drive and specific population changes to intra-striatal synaptic coupling are present in HD.
Kamrun Mukta, Xiao Gao, Peter Robinson, James MacLaurin
The University of Sydney, School of Physics, Sydney, NSW, Australia
Correspondence: Kamrun Mukta (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P102
Corticothalamic neural field theory (NFT) has successfully explained a wide variety of phenomena, ranging from EEG spectra and evoked potentials to nonlinear phenomena such as seizures and Parkinsonian oscillations. Measures such as spectra, correlation and coherence functions are widely used to probe cognitive events and information processing experimentally. Recent work showed that the eigenmodes of a single brain hemisphere are close analogs of spherical harmonics. They are also the building blocks for bihemispheric modes, whose structure and symmetry properties explain many features of resting state and task-related activity. This eigenmode expansion is useful because it helps us understand the dynamics of the brain’s activity in terms of its natural modes. Here, corticothalamic NFT is analyzed on a sphere and used to derive the transfer function, the power spectrum, the correlation function, and the cross spectrum in terms of spherical harmonics. The results are analyzed and compared with planar NFT in both finite and infinite geometries. The results of spherical and finite-planar geometries converge to the infinite-planar geometry in the limit of large brain size. The main effects of the spherical modal structure are explored, particularly to understand the number of modes that contribute significantly to these observable quantities and the effects of the finite spatial extent of the cortex.
When the modal series is truncated, we find that, for physiologically plausible parameters, only the lowest few spatial eigenmodes are needed for an accurate representation of macroscopic brain activity. Cortical modal effects can lead to a double alpha peak structure in the power spectrum, although the main determinant of the alpha peak is corticothalamic feedback. In the spherical geometry, the coherence function between points decays monotonically as their separation increases at a fixed frequency, but persists further at resonant frequencies. The correlation between two points is found to be positive, regardless of the time lag and spatial separation, but decays monotonically as the separation increases at fixed time lag. This analysis of physiologically based corticothalamic NFT in a spherical geometry will enable more realistic modeling and analysis of experimental brain signals in future.
This work was supported by a University of Sydney International Scholarship (USydIS), by the Australian Research Council Center of Excellence for Integrative Brain Function (ARC Center of Excellence Grant CE140100007), and by the Australian Research Council Laureate Fellowship Grant FL140100025.
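The truncated spherical-mode expansion can be illustrated with a toy damped-wave cortex. This sketch omits the corticothalamic loop filters entirely (so no alpha resonance appears), and the parameter values are generic NFT orders of magnitude, not the paper's fits; it only demonstrates the modal sum P(f) = Σ_l (2l+1) |T_l(2πf)|² with k_l² = l(l+1)/R² and how quickly it converges in l.

```python
import numpy as np

def modal_power(freqs, L=10, R=0.1, r_e=0.086, gamma=116.0, G=1.0):
    """Spherical-mode power spectrum for a purely damped-wave cortex:
    P(f) = sum_{l=0..L} (2l+1) |T_l(w)|^2, with modal wavenumber
    k_l^2 = l(l+1)/R^2 and T_l = G / (k_l^2 r_e^2 + (1 - i w/gamma)^2).
    R: cortical radius (m); r_e: axonal range (m); gamma: damping rate (/s)."""
    w = 2.0 * np.pi * np.asarray(freqs, dtype=float)
    P = np.zeros_like(w)
    for l in range(L + 1):
        k2 = l * (l + 1) / R**2
        T = G / (k2 * r_e**2 + (1.0 - 1j * w / gamma) ** 2)
        P += (2 * l + 1) * np.abs(T) ** 2
    return P
```

Because k_l² r_e² grows like l², high-l modes are strongly suppressed, so truncating after the lowest few modes already captures nearly all the power, mirroring the abstract's main finding.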
Petr Marsalek1, Jan Vokral2
1Charles University of Prague, Institute of Pathological Physiology, Praha, Czechia; 2Charles University of Prague, Department of Phoniatrics, Praha, Czechia
Correspondence: Petr Marsalek (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P103
We study how the two sound localization cues, interaural time difference (ITD) and interaural level difference (ILD) can be re-weighted in order to re-learn new peripheral condition in spatial hearing. In human the ITD is used for low frequency sound localization, the ILD is used for high frequency localization, and between 1000 Hz and 1500 Hz there is a transition zone, where both mechanisms play a role. The ITD and ILD are computed in the early (peripheral) binaural auditory pathway and then the information is transduced into the late (central) processing involving mainly cerebral cortex. Hearing impairment of certain type on one ear leads to re-calibration of the localization mechanisms. The ITD can deliver its time difference (phase difference) as long as the attenuation of the affected ear does not exceed the phase difference discrimination capabilities. The ILD re-calibration can be quickly re-learned to set a new level balance between the two ears to signal the sound direction in the intersection of the horizontal and the middle plane. Experiments show that such re-learning is accomplished in one to two days. These experiments of a partner group in our joint experimental and theoretical project aim at describing the situation in hearing impaired listeners and after introducing binaural hearing aids or binaural cochlear implants. We study the dynamic of the re-learning, re-learning spurious location with the enforced visual cue and spurious or distorted ITD and ILD cues. We will present preliminary results of phenomenological modeling the late (central) processing of the localization cues with the implications for further experimenting and the use of binaural hearing prosthetics for sound and speech localization.
Matias Calderini, Eric Kuebler, Philippe Lambert, Jean-Philippe Thivierge
University of Ottawa, Department of Psychology, Ottawa, Canada
Correspondence: Matias Calderini (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P104
Advances in the recording of ongoing activity from large populations of neurons have increasingly shown that information processing arises from the collective behaviour of whole neural circuits. Both in vitro and in vivo recordings suggest that these circuits operate near a critical state poised between fully random and structured activity. Investigations of the role of neural criticality have focused on processing advantages in neural encoding, including transmission, storage and computational power [1]. However, little attention has been paid to the role of neural criticality in accurate downstream decoding of information. The aim of this study is to understand the impact of neural criticality on the linear readout of in vitro multi-electrode activity.
Beggs JM. The criticality hypothesis: how local cortical networks might optimize information processing. Philos Trans A Math Phys Eng Sci 2008, 366(1864), 329–343.
LeBlanc M, Angheluta L, Dahmen K, Goldenfeld N. Universal fluctuations and extreme statistics of avalanches near the depinning transition. Phys. Rev. E 2013, 87, 22126.
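For readers new to the avalanche framework invoked above, a minimal sketch of the standard analysis: avalanches are contiguous runs of active time bins in the binned population spike train, and a branching ratio near 1 is the usual operational signature of critical dynamics. The bin-ratio estimator used here is the simplest common variant, chosen for clarity.

```python
import numpy as np

def avalanche_sizes(counts):
    """Avalanche = contiguous run of nonzero time bins; size = total spikes."""
    sizes, cur = [], 0
    for c in counts:
        if c > 0:
            cur += c
        elif cur > 0:
            sizes.append(cur)
            cur = 0
    if cur > 0:
        sizes.append(cur)
    return sizes

def branching_ratio(counts):
    """Mean ratio of activity in bin t+1 to bin t, over active bins t;
    values near 1 indicate near-critical propagation of activity."""
    c = np.asarray(counts, dtype=float)
    active = c[:-1] > 0
    return float(np.mean(c[1:][active] / c[:-1][active]))
```

On a short synthetic count train the two quantities can be checked by hand.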
Subutai Ahmad1, Max Schwarzer2, Jeff Hawkins1
1Numenta, Redwood City, CA, United States; 2Pomona College, Department of Computer Science, Claremont, CA, United States
Correspondence: Subutai Ahmad (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P105
Michael Beyeler1, Emily L. Rounds2, Kristofor D. Carlson2, Nikil Dutt2, Jeffrey L. Krichmar2
1University of Washington, eScience Institute, Seattle, WA, United States; 2University of California, Irvine, Cognitive Sciences, Irvine, CA, CA, United States
Correspondence: Michael Beyeler (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P106
Supported by recent computational studies, nonnegative sparse coding (NSC) is emerging as a ubiquitous coding strategy across brain regions and modalities. A combination of nonnegative matrix factorization (NMF) and sparse coding, NSC allows populations of neurons to collectively encode high-dimensional stimulus spaces using a compressed, sparse, and parts-based neuronal code. Specifically, we argue that neuronal circuits can (1) achieve sparse codes through competition, and (2) implement NMF by utilizing spike-timing dependent plasticity with homeostasis (STDPH). We applied NMF to two different datasets: (1) receptive fields in the dorsal subregion of the medial superior temporal area (MSTd), and (2) neurophysiological and behavioral recordings from rat retrosplenial cortex (RSC). In both cases, we were able to show that applying NMF to major inputs into these brain regions can result in a sparse representation that captures important aspects of the neuronal response properties of these brain regions. Furthermore, we found similar results applying STDPH to the RSC dataset. These findings support a growing body of evidence suggesting that biological neurons use plasticity, such as STDPH, to produce sparse, compact stimulus representations that vastly reduce the dimensionality of their inputs.
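The NMF step at the heart of NSC can be sketched with the classic Lee-Seung multiplicative updates for the Frobenius loss (the studies above may use other solvers or add explicit sparsity penalties): V ≈ W H with all factors nonnegative, so each column of W is a nonnegative "part" and H gives the compressed encoding.

```python
import numpy as np

def nmf(V, r, n_iter=1000, seed=0):
    """Lee-Seung multiplicative updates for V ~= W @ H (Frobenius loss).
    V: (n, m) nonnegative data; r: number of basis vectors (parts)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r))
    H = rng.random((r, m))
    eps = 1e-9                                   # avoid division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)     # update encodings
        W *= (V @ H.T) / (W @ H @ H.T + eps)     # update basis (parts)
    return W, H
```

Because the updates are multiplicative, W and H remain nonnegative throughout, which is what forces the parts-based structure; on exactly low-rank nonnegative data the reconstruction error becomes small.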
Jonathan Rubin1, Jessica Ausborn2, Abigail Snyder3, Ilya Rybak2, Jeffrey Smith4
1University of Pittsburgh, Department of Mathematics, Pittsburgh, PA, United States; 2Drexel University, Neurobiology & Anatomy, PA, United States; 3Pacific Northwest National Laboratory, WA, United States; 4National Institutes of Health, MD, United States
Correspondence: Jonathan Rubin (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P107
Various neuronal circuits, including a range of central pattern generators (CPGs) in the brainstem and spinal cord of many species, exhibit rhythmic activity patterns. In many CPGs, these patterns consist of sequential activations of different neuronal populations that interact through synaptic connections. Significant effort has gone into exploring, using experimental and theoretical methods, the extent to which the intrinsic bursting or pacemaking capabilities of neurons within these populations are responsible for the existence of the network rhythms in which they participate. For example, experimental studies have established the existence of intrinsically bursting neurons in the pre-Botzinger complex (preBotC) of the mammalian respiratory brainstem, and certain experimental manipulations of burst-supporting conductances in these neurons have eliminated respiratory rhythms. Moreover, recent optogenetic studies in the rodent spinal cord have shown that neurons active in extensor or flexor phases of locomotor rhythms can autonomously generate rhythmic activity. These studies, however, leave open an important question: What happens to this intrinsic bursting when the burst-capable neurons are embedded within the full network with which they interact? In many cases, it remains unknown whether the intrinsic bursting capabilities of subsets of neurons affect the emergent dynamics once these neurons are embedded within a synaptically interconnected circuit and how this bursting capability contributes to the properties of these circuits’ rhythmic outputs. In this study, we use highly reduced neuronal models of CPGs composed of small numbers of neuronal populations to highlight some key principles relating to these issues. In particular, we show that neurons’ intrinsic dynamic properties naturally become masked by the network interactions that support multi-phase rhythmic outputs.
We establish these results using two models: a half-center locomotor network in which extensor and flexor units are coupled with reciprocal synaptic inhibition and a respiratory network comprising several neuronal populations, including respiratory neurons in the preBotC. In the locomotor case, we demonstrate that changes in drives that switch units’ intrinsic dynamics from oscillatory or bursting to tonic spiking have no impact on the existence or frequency of network rhythms. Effects of drives on rhythm frequency are shown to derive instead from the transition mechanisms, such as escape or release, underlying phase switching within the rhythms, with particular transition mechanisms persisting across parameter changes that alter intrinsic dynamics. Subtly, however, intrinsic dynamics can affect which transition mechanisms can arise within a given parameter regime. In the respiratory case, we similarly illustrate a lack of impact of preBotC intrinsic dynamics on a variety of properties of network rhythms including frequency and amplitude responses to changes in drive; consequences of modulation of inhibition; and even effects of blockade of the persistent sodium current that may underlie the intrinsic rhythmicity within the preBotC. We also show that inclusion of a second excitatory component in the network, the recently identified post-inhibitory complex (PiCo), has little effect on network rhythms, despite the intrinsic oscillation capability of the PiCo.
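The masking of intrinsic dynamics by network interactions can be illustrated with a generic half-center sketch (a Matsuoka-style rate model, not the authors' conductance-based populations): neither unit oscillates in isolation, yet reciprocal inhibition plus slow adaptation produces robust alternation, with phase switching governed by the network-level release mechanism rather than by intrinsic bursting. All parameter values are illustrative.

```python
import numpy as np

def half_center(c=1.0, a_w=2.5, b=2.5, tau=0.1, tau_a=1.0, T=20.0, dt=0.002):
    """Two rate units with reciprocal inhibition (a_w) and slow adaptation (b).
    A single unit with these parameters relaxes to a fixed point; coupled,
    adaptation of the active unit releases the suppressed one, so the pair
    alternates. Returns the (time, 2) array of rectified outputs."""
    n = int(T / dt)
    x = np.array([0.1, 0.0])    # membrane-like states (asymmetric start)
    f = np.zeros(2)             # slow adaptation variables
    y_hist = np.zeros((n, 2))
    for i in range(n):
        y = np.maximum(x, 0.0)                          # rectified output
        x += dt * (-x + c - a_w * y[::-1] - b * f) / tau  # cross-inhibition
        f += dt * (y - f) / tau_a                        # slow adaptation
        y_hist[i] = y
    return y_hist
```

Counting sign changes of the output difference verifies sustained alternation, the rhythm that exists only at the network level.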
P108 Modeling predicts altered ion channel mechanisms and firing properties in striatal neurons of the Q175 mouse model of Huntington’s disease
Hanbing Song1, Christina Weaver1, Joseph Goodliffe2, Jennifer Luebke2
1Franklin and Marshall College, Department of Mathematics and Computer Science, Lancaster, PA, United States; 2Boston University School of Medicine, Department of Anatomy and Neurobiology, Boston, MA, United States
Correspondence: Hanbing Song (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P108
Huntington’s disease (HD) is a neurodegenerative disorder with severe movement and cognitive dysfunction. Structural and functional neuropathology in HD occurs in the striatum, mainly targeting medium spiny neurons (MSNs), which are regulated largely by striatal fast spiking interneurons (FSIs). MSNs are categorized by the expression of dopamine receptors (D1 or D2) and their contribution to the direct (D1) and indirect (D2) pathways of the basal ganglia. Q175, a transgenic mouse model of HD, exhibits molecular phenotype changes, neuronal dysfunction, and involuntary limb movement. Our recent in vitro work showed increased input resistance in both D1 and D2 MSNs of 12-month-old Q175 mice compared to wildtype (WT), but reduced rheobase and action potential amplitudes only in D1 MSNs of Q175 versus WT [1]. This modeling study aims to identify mechanisms that might account for this differential vulnerability, allowing us to gain further insight into mechanisms of striatal dysfunction in the context of HD. We constructed a 122-compartment conductance-based MSN model in NEURON, based on two published models [2, 3]. We used our recent optimization method [4] to fit parameters controlling the conductance and kinetics of several ion channels of the model to empirical data from several D1 and D2 neurons in WT and Q175 mice. Error functions comprised multiple features of voltage traces from several current clamp steps. Applying machine learning techniques that rank parameters’ importance to firing properties reduced the number of optimized parameters from 17 to 8. This technique was also used to fit parameters of an FSI model to data from WT and HD model mice. Compared to WT MSN models, the Q175 MSN models had lower conductances of fast and persistent sodium (Na+), slow A-type potassium (K+), and T-type calcium channels. These findings were consistent with published RNA sequencing analysis in the striatum of Q175 mice [5, 6].
Rheobase, differentially reduced in D1 but not D2 neurons of Q175 mice, is a strong correlate of neuronal suprathreshold excitability. Analyses showed that the conductances of the persistent Na+, fast and slow A-type K+, and delayed rectifying K+ channels were the most important determinants of rheobase in our models. The mean conductances of persistent Na+ and slow A-type K+ channels were decreased in both Q175 D1 and D2 MSN models; delayed rectifier K+ channel conductance was reduced only in Q175 D1 MSN models. Adjusting conductance parameters of the fitted WT MSNs based on known up/downregulation of certain genes in Q175 mice was sufficient to account for the rheobase differences between WT and Q175 for D1 but not D2 model MSNs. This computational cellular modeling study complements our recent findings of increased dendritic branching complexity and lower EPSC frequency in D1 but not D2 MSNs of Q175 mice [1]. Together this work lays the foundation for constructing a model of the pathological effects of HD on the striatal network.
Goodliffe J et al. Differential changes to D1 and D2 Medium Spiny Neurons in the 12-month-old Q175 ± mouse model of Huntington’s Disease. Submitted, (2018).
Wolf JA, Moyer JT, Lazarewicz MT, et al. NMDA/AMPA Ratio Impacts State Transitions and Entrainment to Oscillations in a Computational Model of the Nucleus Accumbens Medium Spiny Projection Neuron. J Neurosci 2005, 25:9080–9095.
Evans RC, Morera-Herreras T, Cui Y, et al. The Effects of NMDA Subunit Composition on Calcium Influx and Spike Timing-Dependent Plasticity in Striatal Medium Spiny Neurons. PLoS Comput Biol 2012, 8:e1002493.
Rumbell TH, Dragulic D, Yadav A, et al. Automated evolutionary optimization of ion channel conductances and kinetics in models of young and aged rhesus monkey pyramidal neurons. J Comput Neurosci 2016, 41:65–90.
Beaumont V, Zhong S, Lin H, et al. Phosphodiesterase 10A Inhibition Improves Cortico-Basal Ganglia Function in Huntington’s Disease Models. Neuron 2016, 92:1220–1237.
Langfelder P, Cantle JP, Chatzopolou D, et al. Integrated genomics and proteomics define huntingtin CAG length-dependent networks in mice. Nat Neurosci 2016, 19:623–633.
P109 Influence of cortical network topology and delay structure on EEG rhythms in a whole-brain connectome-based thalamocortical neural mass model
John Griffiths1, Jeremie Lefebvre2
1Rotman Research Institute, Baycrest Health Sciences, Toronto, Canada; 2Krembil Research Institute, University Health Network, Toronto, Canada
Correspondence: John Griffiths (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P109
Large-scale oscillatory activity such as that observed in human M/EEG is believed to arise from a combination of cortical (e.g. intracolumnar excitatory-inhibitory coupling) and thalamocortical rhythmogenic mechanisms. Whilst considerable progress has been made in characterizing these mechanisms separately, relatively little work has been done that attempts to unify intracortical and thalamocortical rhythmogenesis within a single theoretical framework. Building on previous work [1–8], here we present and examine a whole-brain connectome-based neural mass model that combines detailed long-range cortico-cortical connectivity based on primate and human tract tracing data with strong, recurrent thalamocortical circuitry. In the model each network node represents an individual cortico-thalamo-cortical motif with four components: a classic Wilson–Cowan [9] ensemble of excitatory and inhibitory cortical neuronal populations, coupled to a pair of excitatory specific relay and inhibitory reticular thalamic nucleus populations. This system is able to reproduce a variety of known features of human M/EEG recordings, including a 1/f spectral profile; spectral peaks in the alpha, theta, beta, and gamma ranges; and distance-dependent covariance (functional connectivity) structure that is shaped by the underlying anatomical connectivity. Consistent with previous theoretical and experimental observations [2, 3], we also find that increasing sensory drive to thalamic regions triggers a suppression of dominant low frequency rhythms in favour of higher-frequency activity, and also results in an increased susceptibility to entrainment of the entire system by exogenous stimulation. We find that increasing cortico-cortical connectivity does not disrupt but in fact stabilizes the thalamocortical alpha rhythm, and that varying cortico-cortical conduction delays within physiologically plausible limits modifies, but does not fundamentally alter, the power spectrum and overall dynamics.
Finally, we investigate the role of convergence and divergence of corticothalamic and thalamocortical projections, respectively, in determining oscillatory and resonance behaviour in the model, and their implications for the role of the thalamus in promoting and coordinating cortico-cortical synchronization. Taken together, our results clarify the role of cortical network topology and conduction delay structure in shaping both thalamocortical and cortico-cortical rhythmic activity and large-scale brain communication.
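The cortical component of each network node is a classic Wilson–Cowan excitatory-inhibitory pair. A minimal sketch of a single such node follows; the parameter values and the simple sigmoid are illustrative assumptions, not those of the study's thalamocortical model:

```python
import math

# Minimal Wilson-Cowan excitatory (E) / inhibitory (I) population node.
def f(x):
    """Sigmoidal population response function (illustrative form)."""
    return 1.0 / (1.0 + math.exp(-x))

def step(E, I, P=1.25, w_EE=16.0, w_EI=12.0, w_IE=15.0, w_II=3.0,
         tau_E=8.0, tau_I=4.0, dt=0.1):
    """One forward-Euler step of the E-I rate equations (times in ms)."""
    dE = (-E + f(w_EE * E - w_EI * I + P)) / tau_E
    dI = (-I + f(w_IE * E - w_II * I)) / tau_I
    return E + dt * dE, I + dt * dI

E, I = 0.1, 0.1
trace = []
for _ in range(20000):  # 2000 ms at dt = 0.1 ms
    E, I = step(E, I)
    trace.append(E)
```

In the full model each such cortical pair is additionally coupled to thalamic relay and reticular populations and to other nodes via the weighted, delayed connectome; the sketch shows only the isolated cortical dynamics, whose firing rates remain bounded in [0, 1] by construction of the sigmoid.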
Lefebvre J, Hutt A, Frohlich F. Stochastic resonance mediates the state-dependent effect of periodic stimulation on cortical alpha oscillations. eLife 2017, 6:e32054.
Mierau A, Klimesch W, Lefebvre J. State-dependent alpha peak frequency shifts: Experimental evidence, potential mechanisms and functional implications. Neuroscience 2017, 360, 146–154.
Alagapan S, Schmidt SL, Lefebvre J, et al. Modulation of Cortical Oscillations by Low-Frequency Direct Cortical Stimulation Is State-Dependent. PLOS Biology 2016, 14, e1002424.
van Albada SJ, Robinson PA. Relationships between electroencephalographic spectral peaks across frequency bands. Frontiers in Human Neuroscience 2013, 7, 56.
Robinson PA, O’Connor SC, Gordon E, et al. Philosophical Transactions of the Royal Society B: Biological Sciences 2005, 360, 1043–1050.
Robinson PA, Whitehouse RW, Rennie CJ. Physical Review E: Statistical, Nonlinear, and Soft Matter Physics 2003, 68.
Cona F, Lacanna M, Ursino M. A thalamo-cortical neural mass model for the simulation of brain rhythms during sleep. Journal of Computational Neuroscience 2014, 37, 125–148.
Kunze T, Hunold A, Haueisen J, et al. Transcranial direct current stimulation changes resting state functional connectivity: A large-scale brain network modeling study. NeuroImage 2016, 140, 174–187.
Wilson HR, Cowan JD. Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal 1972, 12:1–24.
Chang-Eop Kim, Jihong Oh
Gachon University, Department of Physiology, Seoul, Korea, Republic of
Correspondence: Chang-Eop Kim (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P110
Neurons are conventionally said to be “specific” or “selective” for a feature of a stimulus if they respond differentially to the feature characterizing the given stimulus. For instance, neurons in the primary somatosensory cortex (S1) have in many studies been classified as “noxious-specific” when they respond to pinching by forceps (a noxious stimulus) but not to brush stroke (an innocuous stimulus). Despite the widespread adoption of this simple approach, however, it should be recognized that the given stimulus may carry other features that the neurons could encode, such as texture or dynamics. If these additional features are considered as candidates for the neurons’ selectivity, differential responsiveness to pinching versus brush stroke cannot by itself be interpreted as “noxious-specific”. In this case, an additional stimulus with feature characteristics distinct from pinching and brushing can help characterize the neural selectivity. Indeed, by applying three types of stimuli with distinct feature characteristics (pinching by forceps, brush stroke, and touching by forceps) during in vivo two-photon Ca2+ imaging, we found that many mouse S1 neurons showing differential responsiveness to pinching by forceps are not “noxious-specific” but are instead selective for features of texture or dynamics. Moreover, we introduce a theoretical framework for characterizing neural selectivity in a multidimensional sensory feature space, based on the stimulus-feature design matrix and the acquired experimental results. 1. If all feature vectors of the stimulus-feature matrix are unique and the number of unique feature vectors (d) equals 2^s (where s is the number of stimuli with unique feature characteristics), the selectivity of a neuron can be uniquely specified, regardless of the experimental results. 2. If there is a unique orthogonal (hyper)plane that classifies the experimental results, the selectivity can be uniquely specified. 3.
If orthogonal (hyper)planes are implementable but not unique, the selectivity cannot be specified, and more stimuli are necessary to characterize it. 4. If no orthogonal (hyper)plane is implementable for the experimental results, there are two options: first, add a reasonable unique feature vector and attempt the classification again; second, interpret the results as “mixed selectivity” of the neurons. We systematically re-analyzed previous studies that characterized the selectivity of sensory neurons using brush and forceps within our framework, and it turned out that many of the previously reported selectivity assignments cannot be justified in a multidimensional sensory feature space.
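The core ambiguity argument — two stimuli cannot distinguish a “noxious-specific” neuron from a texture-selective one, while a third stimulus can — can be sketched with a toy stimulus-feature design matrix. The feature assignments below are illustrative assumptions, not the study’s actual matrix:

```python
# Each feature is a binary column over the stimuli. A neuron's selectivity is
# identifiable only if exactly one candidate feature column matches its
# observed response pattern.
stimuli = ["pinch", "brush", "touch_forceps"]
features = {  # hypothetical stimulus-feature design matrix
    "noxious":      {"pinch": 1, "brush": 0, "touch_forceps": 0},
    "hard_texture": {"pinch": 1, "brush": 0, "touch_forceps": 1},
}

def candidate_features(responses, stim_subset):
    """Features whose column matches the response pattern on the subset."""
    return [name for name, col in features.items()
            if all(col[s] == responses[s] for s in stim_subset)]

# A neuron that fires to pinch only:
responses = {"pinch": 1, "brush": 0, "touch_forceps": 0}

two_stim = candidate_features(responses, ["pinch", "brush"])   # ambiguous
three_stim = candidate_features(responses, stimuli)            # unique
```

With only pinch and brush, both “noxious” and “hard_texture” explain the responses equally well; adding the touch-by-forceps stimulus leaves “noxious” as the sole consistent feature, which is the role of the third stimulus in the experiment.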
Uttara Tipnis1, Enrico Amico1, Linhui Xie2, Jingwen Yan3, Michael Wang1, Mario Dzemidzic4, David Kareken4, Li Shen5, Joaquin Goni1
1Indiana University-Purdue University, School of Industrial Engineering, West Lafayette, IN, United States; 2Indiana University-Purdue University, Electrical and Computer Engineering, Indianapolis, IN, United States; 3Indiana University-Purdue University, School of Informatics and Computing, Indianapolis, IN, United States; 4Indiana University School of Medicine, Department of Neurology, Indianapolis, IN, United States; 5University of Pennsylvania, Perelman School of Medicine, Philadelphia, PA, United States
Correspondence: Uttara Tipnis (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P111
Physical connections between different human gray matter regions occur through long-range white-matter fiber-bundles. These fiber-bundles can be traced through diffusion weighted imaging and processed to estimate a whole-brain structural connectivity (SC) matrix (the human connectome). Using this network approach, anatomic connections between any two brain regions connected by white matter fibers (or streamlines) constitute an edge. This complex topological organization is based partly on genetics and environment, with a highly common architecture across individuals. However, a more unique individual fingerprint relies on deviations from this common architecture, also driven by genetics and environment. Here we expand a recently proposed framework for obtaining optimal identifiability in brain connectomics, to identify the extent to which genetically identical mono-zygotic (MZ) twins share SC, and to isolate the sub-circuits that display high MZ twin shared (or genetic) fingerprinting. To assess the results, we used the same approach for test–retest of di-zygotic (DZ) twins, and a null model based on randomly shuffling the SC profiles of the MZ group. Test–retest of the same subjects is an upper boundary for the expected MZ identifiability, whereas DZ is a lower boundary and shuffled MZ is a null model for identifiability. The data sample included 148 pairs of twins from the Human Connectome Project (HCP): 74 MZ pairs and 74 DZ pairs. Weighted SC matrices included, for every edge, the average fractional anisotropy (FA) of the streamlines connecting each pair of brain regions within a multimodal 374-region parcellation. To avoid solutions with negative values, we used the non-negative matrix factorization (NNMF) heuristic algorithm to decompose and subsequently reconstruct SC matrices for different numbers of components (ranging from 2 to 74).
For each decomposition, we calculated the explained variance of each component, and components were added in descending order of explained variance while evaluating the differential identifiability. Optimal reconstruction was obtained by choosing the reconstruction that corresponds to the maximum differential identifiability (Idiff). As recently proposed, Idiff is measured as the correlation gain in same-subject test–retest with respect to between-subject gain. Note that we here expand this concept to MZ- and DZ-twins, hence allowing for genetic heritability as well as environmental fingerprint evaluation. At optimal Idiff, individual SC were reconstructed for MZ and DZ subjects, and pairwise intra-class correlations (ICC) for every edge were obtained. Finally, we obtained a differential ICC matrix (i.e., ICCMZ–ICCDZ). Large positive ICC values indicate edges of high heritability, accounting for environment. The regions with most highly heritable connections include: parietal superior (L), precuneus (L), cingulum medial (L), cingulum anterior (R), temporal inferior (L), and fusiform (L). In summary, through a novel data-driven framework which expands on a recent approach for optimal identifiability on test–retest data, we can detect the most important structural connections and subsequent gray-matter regions that are associated with heritability.
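The differential identifiability computation can be sketched as follows, using synthetic twin connectome profiles in place of the NNMF-reconstructed FA-weighted SC matrices; the pair count, edge count, and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for reconstructed SC profiles: each twin pair shares a
# common profile plus individual noise (the "genetic" component is shared).
n_pairs, n_edges = 20, 300
shared = rng.standard_normal((n_pairs, n_edges))
twin1 = shared + 0.4 * rng.standard_normal((n_pairs, n_edges))
twin2 = shared + 0.4 * rng.standard_normal((n_pairs, n_edges))

# Identifiability matrix: correlation of every twin-1 profile with every
# twin-2 profile. Idiff is the mean within-pair (diagonal) correlation minus
# the mean between-pair (off-diagonal) correlation, in percent.
ident = np.corrcoef(twin1, twin2)[:n_pairs, n_pairs:]
i_self = np.mean(np.diag(ident))
i_others = np.mean(ident[~np.eye(n_pairs, dtype=bool)])
idiff = 100.0 * (i_self - i_others)
```

In the study this quantity is evaluated at each number of NNMF components, and the reconstruction maximizing Idiff is retained; the same machinery applies whether the row pairs are test–retest scans, MZ twins, or DZ twins.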
Leonid Rubchinsky1, Joel Zirkle2
1Indiana University Purdue University Indianapolis & Indiana University School of Medicine, Department of Mathematical Sciences & Stark Neurosciences Research Institute, Indianapolis, IN, United States; 2Indiana University Purdue University Indianapolis, Department of Mathematical Sciences, Indianapolis, IN, United States
Correspondence: Leonid Rubchinsky (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P112
Synchronization of neural activity has been associated with several neural functions. Abnormalities of neural synchrony may underlie different neurological and neuropsychiatric diseases. Neural synchrony in the brain at rest is usually very variable and intermittent. Experimental studies of neural synchrony in different neural systems report a feature which appears to be universal: the intervals of desynchronized activity are predominantly very short (although they may be more or less numerous, which affects average synchrony). This kind of short desynchronization dynamics was conjectured to potentially facilitate efficient creation and break-up of functional synchronized neural assemblies. Cellular, synaptic, and network mechanisms of short desynchronization dynamics are not fully understood. In this study we use computational neuroscience methods to investigate the effects of spike-timing-dependent plasticity (STDP) on the temporal patterns of synchronization. We employed a minimal network of two simple conductance-based model neurons mutually connected via excitatory STDP synapses. The dynamics of this model network was subjected to the time-series analysis methods used in prior experimental studies. We found that STDP may alter synchronized dynamics in the network in several ways, depending on the time-scale of action of plasticity. However, in general, the action of STDP tends to promote dynamics with short desynchronizations (i.e. dynamics similar to those observed in prior experiments). Complex interplay of the cellular and synaptic dynamics may lead to the activity-dependent adjustment of synaptic strength in such a way as to facilitate short desynchronizations in the activity of weakly coupled intermittently synchronized neurons.
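The desynchronization-interval analysis applied to the model output can be sketched on a synthetic phase-difference series standing in for phases extracted from the simulated voltage traces; the drift dynamics and synchrony threshold below are illustrative assumptions, not the study’s actual analysis parameters:

```python
import math
import random

random.seed(3)

# Synthetic phase difference: a noisy drift weakly attracted toward sync
# (multiples of 2*pi), producing intermittent phase slips.
n = 5000
dphi = []
x = 0.0
for _ in range(n):
    x += random.gauss(0.0, 0.4)   # phase diffusion
    x -= 0.3 * math.sin(x)        # weak attraction toward synchrony
    dphi.append(x)

# A sample is "synchronized" if the wrapped phase difference is small.
sync = [abs(math.atan2(math.sin(p), math.cos(p))) < 1.0 for p in dphi]

# Collect durations (in samples) of consecutive desynchronized episodes.
desync_lengths = []
run = 0
for s in sync:
    if s:
        if run:
            desync_lengths.append(run)
        run = 0
    else:
        run += 1
if run:
    desync_lengths.append(run)
```

A histogram of `desync_lengths` is the quantity of interest: short-desynchronization dynamics correspond to a distribution dominated by runs of just a few samples, which is the signature the study tracks as STDP parameters vary.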
P113 Modeling the variability of spontaneous astrocyte calcium activity and responses to repeated stimuli
Marsa Taheri1, John A. White2
1University of Utah, Department of Bioengineering, Salt Lake City, UT, United States; 2Boston University, Biomedical Engineering, Boston, MA, United States
Correspondence: Marsa Taheri (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P113
Accumulating evidence suggests that astrocytes, a major glial cell type, communicate bidirectionally with neurons and play many important roles in the mammalian brain, such as modulating synaptic transmission. Many of these functions are regulated by or linked to astrocyte intracellular Ca2+ signaling. We showed in our recent experimental and computational work [1, 2] that astrocyte Ca2+ transients evoked by a single, focal application of ATP (activating astrocyte G-protein coupled receptors) are temporally heterogeneous due to specific variability in the biological mechanisms underlying the Ca2+ transients. In our current work, we examine astrocyte Ca2+ activity in response to multiple deliveries of ATP stimuli, to assess how astrocytes may respond to neuronal activity and what their Ca2+ dynamics under different experimental conditions reveal about the inputs they are receiving. We use two-photon microscopy to measure Ca2+ activity in mouse cortical astrocytes expressing the genetically-encoded Ca2+ indicator GCaMP5G. We evoke Ca2+ activity through brief (60 ms), focal applications of ATP with varying application time intervals (from 15 s to 4 min). We find that these evoked Ca2+ transients are much more variable than responses to single stimuli. This added variability arises mainly from interactions related to the timing of repeated stimuli, temporally heterogeneous Ca2+ responses to each stimulus (including variability in response latency), and spontaneous/intrinsic astrocyte Ca2+ activity (which is also noisy and unpredictable). Given this high variability, we are interested to see whether we can observe any patterns in the evoked Ca2+ responses and to better understand the variability underlying these responses. 
We use a phenomenological, statistical modeling approach (rather than a biophysically detailed, mechanistic one) to examine our data, due to the complexity of the data and the fact that many details of the biological mechanisms underlying spontaneous and evoked astrocyte Ca2+ activity remain unknown. First, we ignore the variability in the shape of Ca2+ responses and, instead, make the Ca2+ recordings binary, consisting of only two observable states: On or Off. We then examine the On and Off dwell times for both spontaneous and evoked astrocyte Ca2+ activity, and develop Hidden Markov Models based on our results and knowledge of the biology underlying the Ca2+ activity. By comparing the results generated from these models (e.g. dwell times, the probability of a cellular region being On at any given time during the recording, etc.), we find that the simplest model that reproduces our results consists of 3 hidden states (an Off/Closed state and two On/Open states). Furthermore, we determine which transition rates, at the minimum, must change and by how much in order to switch the Ca2+ activity from spontaneous to evoked. Lastly, we simulate Ca2+ responses to multiple stimuli (by incorporating time-variable transition rates) with different time intervals of application and compare the variability in the resulting Ca2+ activity with our experimental data.
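A minimal simulation of the three-hidden-state model (one Off state and two On states that share a single observable “On” label) is sketched below; the transition probabilities are illustrative assumptions, and the actual fitting of the Hidden Markov Model to data is omitted:

```python
import random

random.seed(7)

# Three hidden states; "On1" and "On2" emit the same observable "On" label.
P = {
    "Off": {"Off": 0.95, "On1": 0.04, "On2": 0.01},
    "On1": {"Off": 0.10, "On1": 0.85, "On2": 0.05},
    "On2": {"Off": 0.02, "On1": 0.03, "On2": 0.95},
}

def simulate(n, start="Off"):
    """Sample a hidden-state path of length n from the Markov chain."""
    s, path = start, []
    for _ in range(n):
        r, acc = random.random(), 0.0
        for nxt, p in P[s].items():
            acc += p
            if r < acc:
                s = nxt
                break
        path.append(s)
    return path

hidden = simulate(20000)
observed = [0 if s == "Off" else 1 for s in hidden]  # binarized Ca2+ trace

# On dwell times: lengths of consecutive runs of 1s in the observable trace.
dwells, run = [], 0
for o in observed:
    if o:
        run += 1
    elif run:
        dwells.append(run)
        run = 0
if run:
    dwells.append(run)
```

Because the two On states have different escape rates, the On dwell-time distribution is a mixture of two exponential-like components, which is what distinguishes a three-state model from a simple two-state (On/Off) one; switching from spontaneous to evoked activity is then modeled by transiently changing a subset of the transition rates.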
Taheri M, Handy G, Borisyuk A, White JA. Diversity of Evoked Astrocyte Ca2+ Dynamics Quantified through Experimental Measurements and Mathematical Modeling. Front Syst Neurosci. 2017, 11, 79.
Handy G, Taheri M, White JA, Borisyuk A. Mathematical investigation of IP3-dependent calcium dynamics in astrocytes. J Comput Neurosci. 2017, 42(3), 257–273.
P114 From connectivity to activity: Community detection reveals multiple simultaneous dynamical regimes within networks
Zoë Tosi1, John Beggs2
1Indiana University Bloomington, Cognitive Science Department, BLOOMINGTON, IN, United States; 2Indiana University Bloomington, Department of Physics, Bloomington, IN, United States
Correspondence: Zoë Tosi (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P114
Nigam S, et al. Rich-club organization in effective connectivity among cortical neurons. J Neurosci 2016, 36(3), 670–684.
Lancichinetti A, Radicchi F, Ramasco JJ, et al. Finding statistically significant communities in networks. PLoS ONE 2011, 6(4), e18961.
Tosi Z, Beggs J. Cortical Circuits from Scratch: A Metaplastic Architecture.… arXiv 2017, 1706.00133.
Litwin-Kumar A, Doiron B. Slow dynamics and high variability in balanced cortical networks with clustered connections. Nat Neurosci 2012, 15(11), 1498–1505.
Aaron Regan Shifman, John Lewis
University of Ottawa, Department of Biology, Ottawa, Canada
Correspondence: Aaron Regan Shifman (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P115
The transmembrane ionic currents which underlie action potentials give rise to electric fields in the extracellular space. The high frequency component of these electric fields, due to spiking neurons, is referred to as multi-unit activity (MUA), whereas the lower frequencies, primarily due to synaptic activity, are referred to as local field potentials (LFPs). Interpretation of these signals and source localization are often challenging, so accurate modeling approaches are critical. Typically, these fields are modeled in a post hoc form: a traditional neuronal model simulation is run, and the electric fields are then calculated from that simulation. Because the conductivity of the extracellular space is relatively high, the electric fields are generally assumed to be too weak to feed back and influence their own generation. However, in brain regions of lower conductivity, extracellular potentials may play a functional role by influencing membrane potentials, and therefore the dynamics of nearby neurons—this is known as ephaptic coupling. The closed-loop nature of ephaptic coupling cannot be modeled using post hoc approaches. We are optimizing more appropriate methods to investigate how different conditions influence the magnitude of ephaptic effects. We have previously shown that extracellular field potentials in simplified networks of model cortical neurons can impede synchronization. In order to study these effects in greater detail, we have developed a generalized framework for modeling ephaptic coupling in morphologically more-realistic neurons. We compare the coupling properties of neurons with “stellate-like” and “pyramidal-like” morphologies to further understand the role that neural geometry plays in ephaptic coupling. Being able to efficiently explore ephaptic coupling from a computational perspective will allow us to better understand the conditions in which electric fields may influence neuronal dynamics in general.
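The post hoc approach contrasted above is commonly implemented with the point-source approximation for a homogeneous medium, V(r) = I / (4·pi·sigma·r), summed over transmembrane current sources. The sketch below uses illustrative source positions and currents:

```python
import math

SIGMA = 0.3  # extracellular conductivity, S/m (a typical cortical value)

def extracellular_potential(sources, electrode):
    """Sum point-source contributions (volts) at an electrode position.

    sources: list of (x, y, z, current) tuples in meters and amperes.
    """
    v = 0.0
    for (x, y, z, current) in sources:
        r = math.dist((x, y, z), electrode)
        v += current / (4.0 * math.pi * SIGMA * r)
    return v

# A simple transmembrane dipole: equal and opposite point currents 100 um
# apart along the z axis (e.g. synaptic sink and return source).
sources = [(0.0, 0.0, 0.0, 1e-9), (0.0, 0.0, 100e-6, -1e-9)]

v_near = extracellular_potential(sources, (50e-6, 0.0, 0.0))
v_far = extracellular_potential(sources, (5e-3, 0.0, 0.0))
```

Because the sources are computed from a finished simulation, nothing here can feed back onto the membrane potential — precisely the limitation the closed-loop ephaptic framework is meant to overcome; note also how the dipole field decays steeply with distance and vanishes on the symmetry plane between the two sources.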
Nathan Gouwens, Staci Sorensen, Jim Berg, Changkyu Lee, Tim Jarsky, Jonathan Ting, Michael Hawrylycz, Anton Arkhipov, Hongkui Zeng, Christof Koch, Susan Sunkin, David Feng, Colin Farrell, Hanchuan Peng, Ed Lein, Lydia Ng, Amy Bernard, John Phillips
Allen Institute for Brain Science, Modelling, Analysis and Theory, Seattle, WA, United States
Correspondence: Nathan Gouwens (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P117
Joshua Goldwyn1, Michiel Remme2, John Rinzel3
1Swarthmore College, Swarthmore, PA, United States; 2Humboldt University in Berlin, Institute for Theoretical Biology, Berlin, Germany; 3New York University, Center for Neural Science & Courant Institute of Mathematical Sciences, New York, NY, United States
Correspondence: Joshua Goldwyn (firstname.lastname@example.org)
BMC Neuroscience 2018, 19(Suppl 2):P118
Coincidence detector neurons are cells that generate spikes preferentially in response to synaptic inputs that arrive (nearly) simultaneously. Coincidence detection is a fundamental computation by which neurons extract timing information from their inputs. Examples of superb coincidence detectors are principal cells of the medial superior olive (MSO) in the mammalian auditory brainstem. MSO neurons encode sound source location with high temporal precision by distinguishing submillisecond timing differences among inputs. Distinctive biophysical properties contribute to the remarkable temporal precision of MSO neurons. For instance, inactivation of sodium current (INa) and activation of low-threshold potassium current (IKLT) provide dynamic, voltage-gated, negative feedback in subthreshold voltage ranges that can deny adequate summation and spike generation unless the inputs occur with near simultaneity [1, 3]. We investigate additional structural and dynamical specializations in coincidence detector neurons. Using mathematical analysis and simulations of a two-compartment neuron model, we show that the electrical coupling between soma and axon, as well as the distribution of INa and IKLT in soma and axon regions of a model MSO neuron, can be configured to enhance coincidence detection sensitivity. Specifically, we find that a two-compartment model with a “feedforward” configuration—one in which the input regions of a cell (soma and dendrites) strongly drive activity in the spike-generating output region (axon), but backpropagation from the axon into the soma is weak—is significantly advantageous for coincidence detection. In the feedforward configuration, spikes are generated with greater efficiency (fewer INa channels) than in a one-compartment model. In addition, INa inactivates more than in models with weak feedforward coupling. The feedforward configuration can, therefore, more effectively enable INa inactivation to prevent spike generation in response to non-coincident inputs.
A dynamic IKLT current further enhances coincidence detection sensitivity in these models. Our findings confirm and elucidate physiological studies of MSO neurons, such as the observation that the site of spike generation is electrically isolated from the soma, with weak backpropagation of action potentials [2]. An innovation in our method is to formulate a family of two-compartment neuron models, parameterized by the strength of coupling between input regions (soma + dendrite) and output regions (axon) of a cell. We create a parameter space of coupling configurations, and systematically investigate this family of models to study the relationships between structure, dynamics, and computation in coincidence detection neurons. While our work focuses on the remarkable MSO neurons, our framework can be used more generally to explore effects of soma-axon coupling on dynamics and computation in neurons well-described by a two-compartment framework.
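The feedforward two-compartment idea can be sketched with a passive linear model: the soma compartment drives the axonal compartment strongly while backpropagation is weak, and coincident EPSPs produce a larger axonal peak than temporally separated ones. All parameters are illustrative assumptions, and the sketch deliberately omits the INa/IKLT kinetics that the actual model uses:

```python
import math

def peak_axon_voltage(lag_ms, g_fwd=0.8, g_back=0.05,
                      tau_s=1.0, tau_a=0.5, dt=0.01, t_end=20.0):
    """Peak axonal depolarization for two somatic EPSPs lag_ms apart.

    g_fwd >> g_back encodes the "feedforward" coupling configuration:
    strong soma-to-axon drive, weak axon-to-soma backpropagation.
    """
    vs = va = peak = 0.0
    t = 0.0
    while t < t_end:
        # Two brief EPSP-like somatic drives, lag_ms apart (tau = 0.5 ms).
        drive = math.exp(-t / 0.5)
        if t >= lag_ms:
            drive += math.exp(-(t - lag_ms) / 0.5)
        dvs = (-vs + drive + g_back * (va - vs)) / tau_s
        dva = (-va + g_fwd * (vs - va)) / tau_a
        vs += dt * dvs
        va += dt * dva
        peak = max(peak, va)
        t += dt
    return peak

coincident = peak_axon_voltage(0.0)   # simultaneous inputs
separated = peak_axon_voltage(5.0)    # 5 ms apart
```

Even in this purely passive caricature the axonal peak for coincident inputs is roughly twice that for well-separated inputs; in the full model, INa inactivation and IKLT activation sharpen this discrimination further by actively suppressing responses to non-coincident inputs.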
Huguet G, Meng X, Rinzel, J. Phasic Firing and Coincidence Detection by Subthreshold Negative Feedback: Divisive or Subtractive or, Better, Both. Frontiers in Computational Neuroscience 2017, 11, http://doi.org/10.3389/fncom.2017.00003
Scott LL, Hage TA, Golding NL. Weak action potential backpropagation is associated with high-frequency axonal firing capability in principal neurons of the gerbil medial superior olive. The Journal of Physiology 2007, 583(2), 647–661. http://doi.org/10.1113/jphysiol.2007.136366
Svirskis G, Kotak VC, Sanes DH, Rinzel J. Sodium Along With Low-Threshold Potassium Currents Enhance Coincidence Detection of Subthreshold Noisy Signals in MSO Neurons. Journal of Neurophysiology 2004, 91(6), 2465. http://doi.org/10.1152/jn.00717.2003
Ryan Phillips, Jonathan Rubin
University of Pittsburgh, Department of Mathematics, Pittsburgh, PA, United States
Correspondence: Ryan Phillips (email@example.com)
BMC Neuroscience 2018, 19(Suppl 2):P119
The substantia nigra pars reticulata (SNr) is one of the primary output nuclei of the basal ganglia and receives converging GABAA receptor mediated synaptic inputs from the direct and indirect pathways. Due to this convergence, the SNr is thought to be important in behaviors associated with these two pathways such as decision making and motor control. Consistent with this idea, abnormal activity within the SNr is associated with parkinsonian symptoms, seizures and impaired decision making. Therefore, understanding how the SNr integrates inputs from these two pathways may be critical for understanding basal ganglia function.
The projections from indirect and direct pathways form synapses at distinct locations on SNr neurons and are known to undergo short-term plasticity. Striatal neurons of the direct pathway preferentially form synapses on the distal dendrites of SNr neurons and undergo synaptic facilitation [1, 2]. In contrast, neurons from the external segment of the globus pallidus of the indirect pathway form basket-like synapses around the somas of SNr neurons and undergo synaptic depression [1, 3]. The functional significance of the location of these synapses is unclear; however, these spatial characteristics may influence their short-term plasticity properties. GABAA synapses are prone to breakdown of the reversal potential (EGABA) mediated by increases in the intracellular Cl- concentration [Cl-]i. Due to the differences in size and in the distribution of the Cl- extruder KCC2, we hypothesize that dendritic and somatic compartments may have different susceptibilities to breakdown of EGABA, which may contribute to differences in the properties of direct and indirect pathway synapses on SNr neurons. To test this hypothesis, we constructed a novel conductance-based model of an SNr neuron with dendritic and somatic compartments. After establishing that the model’s dynamics matches a range of experimental observations on SNr firing patterns, we used the model to investigate the effects of [Cl-] dynamics on EGABA and short-term synaptic plasticity. We show that GABAA- and KCC2-mediated fluctuations in [Cl-]i can explain many aspects of the short-term plasticity seen with GABAergic inputs from the direct and indirect pathways in the SNr. Integration of GABAA receptor-mediated synaptic inputs to somatic and dendritic compartments is not unique to SNr neurons, and therefore these results may have implications for other brain regions.
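The EGABA breakdown mechanism can be sketched with the Nernst equation for Cl-: sustained GABAA input loads the compartment with Cl-, shifting EGABA toward depolarized values and weakening inhibition. The concentrations below are illustrative assumptions, not the model’s actual values:

```python
import math

R, T, F = 8.314, 310.0, 96485.0  # J/(mol K), body temperature K, C/mol
CL_OUT = 120.0                   # extracellular Cl- concentration, mM

def e_gaba(cl_in_mM):
    """Cl- reversal potential in mV (valence -1, so the ratio is in/out)."""
    return 1000.0 * (R * T / F) * math.log(cl_in_mM / CL_OUT)

e_rest = e_gaba(7.0)     # low resting [Cl-]i: hyperpolarized EGABA
e_loaded = e_gaba(20.0)  # after Cl- accumulation: depolarized EGABA
shift = e_loaded - e_rest
```

A roughly threefold rise in [Cl-]i depolarizes EGABA by nearly 30 mV in this sketch; compartment-specific Cl- loading rates (set by compartment volume and KCC2 density) are what the full model uses to differentiate somatic and dendritic GABAA synapses.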
Connelly WM, et al. Differential short-term plasticity at convergent inhibitory synapses to the substantia nigra pars reticulata. Journal of Neuroscience 2010, 30, 44, 14854–14861.
Von Krosigk M, et al. Synaptic organization of GABAergic inputs from the striatum and the globus pallidus onto neurons in the substantia nigra and retrorubral field which project to the medullary reticular formation. Neuroscience 1992, 50, 3, 531–549.