 Research article
 Open Access
Functional two-way analysis of variance and bootstrap methods for neural synchrony analysis
BMC Neuroscience volume 15, Article number: 96 (2014)
Abstract
Background
Pairwise association between neurons is a key feature in understanding neural coding. Statistical neuroscience provides tools to estimate and assess these associations. In the mammalian brain, activating ascending pathways arise from neuronal nuclei located at the brainstem and at the basal forebrain that regulate the transition between sleep and awake neuronal firing modes in extensive regions of the cerebral cortex, including the primary visual cortex, where neurons are known to be selective for the orientation of a given stimulus. In this paper, the estimation of neural synchrony as a function of time is studied in data obtained from anesthetized cats. A functional data analysis of variance model is proposed. Bootstrap statistical tests are introduced in this context; they are useful tools for the study of differences in synchrony strength regarding 1) transition between different states (anesthesia and awake), and 2) affinity given by orientation selectivity.
Results
An analysis of variance model for functional data is proposed for neural synchrony curves, estimated with a cross-correlation-based method. Dependence arising from the experimental setting needs to be accounted for. Bootstrap tests allow the identification of differences between experimental conditions (modes of activity) and between pairs of neurons formed by cells with different affinities given by their preferred orientations. In our test case, interactions between experimental conditions and preferred orientations are not statistically significant.
Conclusions
The results reflect the effect of different experimental conditions, as well as the affinity regarding orientation selectivity in neural synchrony and, therefore, in neural coding. A cross-correlation-based method is proposed that works well under low firing activity. Functional data statistical tools produce results that are useful in this context. Accounting for dependence is shown to be necessary, and bootstrap tests are an appropriate method with which to do so.
Background
The nervous system consists of a very large number of neurons (and glial cells) that are connected in complex networks. Neurons convey information by means of electrical pulses, called action potentials or spikes, consisting of a very fast and transient depolarization of their membrane potential. Sequences of spikes, called spike trains, are believed to serve as an information code in the brain. Pairwise synchrony between spike trains is widely studied in neuroscience as a way to understand these codes. Although neuronal interactions are probably not described only by pairwise associations [1], they have been shown to provide important information about neural coding [2–5]. On the other hand, the definition of neural synchrony is a matter of debate and different conceptions of it exist, such as “exact spiking coincidence” or “firing rate association”. There are several existing models for synchrony between simultaneous spike trains, most of them based on cross-correlation analysis. Common methods to measure synchrony are, for example, the cross-correlogram or the joint peri-stimulus time histogram [6]. Other methods not based on cross-correlation analysis include unitary event analysis [7, 8], the conditional synchrony measure [9] and a method based on the distances between the closest spikes [10], among others. Most commonly, association measures are used to test for the presence/absence of synchrony. Several methodologies exist for this aim [11, 12]; however, in this paper we address a different problem. We use a cross-correlation-based method, called the Integrated Cross-correlation Synchrony Index (ICCSI), which allows us to obtain a synchrony curve as a function of time and search for differences in the strength of pairwise associations regarding different factors. A similar problem was considered by Faes et al. [9], who compare the synchrony between neurons under experimental conditions with synchrony at baseline.
During deep sleep, neurons in the cerebral cortex present a highly oscillatory global activity that can be observed by means of an electroencephalogram (EEG) or an electrocorticogram (ECoG). In the awake state, in contrast, these global oscillations disappear, giving rise to a less synchronized global activity. The transition from the sleep state to the awake state is regulated by the activating ascending pathways, which are afferent neuronal pathways originating in nuclei located at the basal forebrain and the brainstem [13, 14]. Under general anesthesia, the EEG is characterized by the presence of a number of patterns (spindles, K-complexes, delta waves) that show a progressive increase in low-frequency, high-amplitude activity [15–17]. In this scenario, the transition to the awake-like state can also be reproduced by means of microelectrical stimulation of some activating ascending areas [18]. On the other hand, an important property of neurons in the primary visual cortex (V1) is orientation selectivity; i.e., the neurons respond better (with a higher firing rate) to a specific orientation of the visual stimulus, the so-called preferred orientation [19]. This property is important in neurophysiological studies because it can provide meaningful clues regarding the physiology and microanatomy of the striate cortex.
The aim of the present study is to investigate the differences in synchrony strength regarding different experimental conditions given by anesthesia and awake-like activity, as well as the orientation selectivity of neurons in V1: we study whether similarities among neurons, such as the similarity in preferred orientation, affect the strength of correlated activity. We perform the analysis in a group of neurons recorded from V1 of an anesthetized adult cat. A method is proposed to investigate whether the mode of activity (anesthesia or awake-like), and the affinity in orientation selectivity of a given pair of neurons, have a determining influence on how neuronal synchronization evolves.
The data are synchrony measurements computed at any time point. Therefore, the data are curves varying continuously over time. This is the reason why, in this paper, the word functional will be used to denote the nature of the data and not to make reference to the neurophysiology of the neurons under study. This term comes from statistics, where “functional data analysis” is a rapidly developing research field.
A functional two-way analysis of variance is proposed. Functional data analysis tools, based on Cuesta-Albertos and Febrero-Bande [20], are used. The method is adapted to consider the dependence that exists among the data because of the experimental setting. A parametric bootstrap is proposed for hypothesis testing.
Methods
In this section, the data are presented. Also, the synchrony measure used to obtain the functional data is described, as well as the statistical methodology used to cope with the functional analysis of variance (ANOVA) model.
Dataset
Data were recorded from an anesthetized and paralyzed adult cat. A microelectrode array with eight independently movable electrodes was introduced into the primary visual cortex of the animal for neuronal recording. Another two microelectrodes were introduced into the brainstem and basal forebrain for electrical stimulation. These stimulations, which we denote as bs (when the brainstem is stimulated) and bf (when the basal forebrain is stimulated), provoked a change in cortical activity from anesthesia to an awake-like pattern. All experiments followed the guidelines of the International Council for Laboratory Animal Science and the European Union (Directive 86/609/EEC), and the protocols were approved by the University of A Coruña Committee on Animal Care.
At the beginning of each recording, neurons were characterized regarding their preferred orientation. Drifting gratings were used to visually stimulate the cat while the firing activities of a group of neurons were recorded. Each grating corresponded to an angle, which we call orientation, with a specific direction of movement. Orientation (and direction) are continuous variables; however, owing to the nature of the experiments, they will here be considered as discrete. Sixteen possible orientation-direction gratings were used: eight orientations with two possible directions each. For example: a drifting grating at 90° (the lines composing the grating are, therefore, vertical) that moves from right to left is a possible orientation-direction stimulus; another moving from left to right is a different one. Although the use of the two possible directions is also of interest in the study of other properties of V1 neurons (for example, the selectivity to direction), in this work we focus our analysis on orientation selectivity. Hence, there were eight possible values for orientation: 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135° and 157.5°. Each recorded neuron was thus associated with one orientation (the preferred one), corresponding to its maximum firing rate. However, we still need to go one step further, as the objective of the study is to evaluate the effect of orientation selectivity on neural synchrony. To achieve this aim, each pair of neurons is identified with a value of a variable, G, which is defined as the difference between the preferred orientations of the neurons that form the pair. That is, if O_{1} and O_{2} are the preferred orientations of two neurons, we define G=min{|O_{1}−O_{2}|, 180°−|O_{1}−O_{2}|}. Given the previous considerations, G can take one of five possible values: 0°, 22.5°, 45°, 67.5° and 90°.
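As a concrete illustration of how G is computed, the following sketch maps two preferred orientations to their difference (the function name is ours, not the paper's):

```python
def orientation_difference(o1, o2):
    """G for two preferred orientations (degrees): orientations live on a
    180-degree circle, so take the smaller of the raw gap and its complement."""
    d = abs(o1 - o2) % 180.0
    return min(d, 180.0 - d)

# The eight preferred orientations used in the experiment.
orientations = [0.0, 22.5, 45.0, 67.5, 90.0, 112.5, 135.0, 157.5]

# Over all pairs of these orientations, G takes exactly five values.
g_values = sorted({orientation_difference(a, b)
                   for a in orientations for b in orientations})
print(g_values)  # → [0.0, 22.5, 45.0, 67.5, 90.0]
```

The absolute values and the complement with respect to 180° are what collapse, for example, the pair (0°, 157.5°) to G = 22.5°.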
Throughout the paper, we will denote the number of neurons in a simultaneously recorded group by n and the number of possible pairs by r=\frac{n(n-1)}{2}. The number of experimental conditions will be denoted by K, the number of trials in each of these conditions will be L, and N will denote the total sample size: N=KrL. For our data, n=8, r=28, K=2, L=4 and N=224. The firing rates of the test neurons varied from less than 1 Hz to around 7 Hz; however, the most common values ranged from 1 to 3 Hz, denoting typically low firing activity.
Synchrony measure
To measure synchrony, we used a cross-correlation-based measure defined as the area under the cross-correlation function in a neighborhood of zero. Data were recorded under spontaneous activity, which is characterized by very low firing activity. Exact spiking coincidences hardly ever occur, although the global activity of the brain is highly synchronized. By considering the area under the normalized cross-correlation function in an interval around zero, we allow for a more relaxed definition of synchronous spike trains than that of just exactly coincident events.
Integrated cross-correlation synchrony index (ICCSI)
Let \mathcal{X}_{1}=\{X_{i}^{1}\}_{i=1}^{J_{1}} and \mathcal{X}_{2}=\{X_{j}^{2}\}_{j=1}^{J_{2}} be two simultaneously recorded spike trains in the time interval [0,T]. That is, X_{i}^{1} is the time when the ith spike of train 1 occurred, and similarly for X_{j}^{2}. Let U_{i} and U_{−i} be the waiting times from a spike in train 1 to the ith subsequent and the ith preceding spike in train 2, respectively. The probability density functions of these random variables are called the forward and backward cross-interval densities of order i, and we denote them by η_{i}(τ) and η_{−i}(τ). The cross-correlation function ζ(\mathcal{X}_{1},\mathcal{X}_{2};τ) is defined as the sum of the cross-interval densities of all orders:

\zeta(\mathcal{X}_{1},\mathcal{X}_{2};\tau)=\sum_{i=1}^{\infty}\left({\eta}_{i}(\tau)+{\eta}_{-i}(\tau)\right), \qquad (1)

where τ is an arbitrary waiting time. The normalized cross-correlation function, \mathcal{T}(\mathcal{X}_{1},\mathcal{X}_{2};\tau), represents the probability density function of the time from an event in train 1 to an event randomly chosen in train 2 [21].
The observed spike trains can be used to estimate the cross-interval densities. Empirical normalized cross-correlograms are used to estimate the normalized cross-correlation function. The cross-correlogram, \widehat{\zeta}(\mathcal{X}_{1},\mathcal{X}_{2};\tau), is built as the histogram (or a smooth version, such as a kernel estimate) of the observed waiting times between the spikes of the first neuron and the spikes of the second neuron. Usually, joint firing, or close-in-time firing, is the event of interest, so only small values of τ in (1) are taken into account. Then, we consider the cross-correlogram normalized in [−v,v], \widehat{\mathcal{T}}(\mathcal{X}_{1},\mathcal{X}_{2};\tau), as follows:

\widehat{\mathcal{T}}(\mathcal{X}_{1},\mathcal{X}_{2};\tau)=\frac{\widehat{\zeta}(\mathcal{X}_{1},\mathcal{X}_{2};\tau)}{\int_{-v}^{v}\widehat{\zeta}(\mathcal{X}_{1},\mathcal{X}_{2};u)\,du},\qquad \tau \in [-v,v].
The intuitive definition of synchrony involves the event of two neurons firing together. Under low firing rates, the spikes corresponding to two highly synchronized neurons do not appear exactly at the same time, although they may follow a similar pattern. In this context, flexible tools are needed to capture synchrony. We use the integral of the cross-correlogram around zero:

Y(\mathcal{X}_{1},\mathcal{X}_{2})=\int_{-\delta v}^{\delta v}\widehat{\mathcal{T}}(\mathcal{X}_{1},\mathcal{X}_{2};\tau)\,d\tau,

where δ<1 is an arbitrary small number chosen by the researcher. In this way, we allow synchrony to be based on delayed firing and not only on simultaneous firing. Integrating \widehat{\mathcal{T}}(\mathcal{X}_{1},\mathcal{X}_{2};\tau) in a neighborhood of zero, we account for spikes that occur closely in time, though not exactly at the same time.
To study the evolution of synchrony in time, sliding windows can be used. That is, if the total observational time of the spike trains is [0,T], synchrony at time t∈[w,T−w] can be estimated by computing Y(\mathcal{X}_{1},\mathcal{X}_{2}) in the time window (t−w,t+w]. Using a time grid, 0<t_{1}<…<t_{M}<T, and computing the ICCSI at each time point of the grid, synchrony becomes a function of time: Y(\mathcal{X}_{1},\mathcal{X}_{2};t). Details on functional data analysis are not given here, though we outline the basic notions that are necessary to understand our analysis. For details and theory on this subject, we refer the reader to, for example, the books [22–24].
The number of pairs in each of the categories given by G (0°, 22.5°, 45°, 67.5° and 90°) is 5, 12, 6, 3 and 2, respectively. Therefore, there are 20, 48, 24, 12 and 8 curves in each category defined by stimulus and orientation. The curves can be evaluated over as many points as desired. Points in an equispaced grid 0<t_{1}<…<t_{M}=T are considered: from 10 s to 230 s every 0.1 s. Therefore, each synchrony curve is evaluated over 2201 points. Let Ψ={(i,j): i,j∈{1,…,n}, i<j} and denote the pairs of neurons with indices (i,j)∈Ψ. We will denote by {Y}_{kl}^{(i,j)}(t) the lth trial for the curve Y(\mathcal{X}_{i},\mathcal{X}_{j};t) under the kth stimulus; k=1,2 and l=1,2,3,4. The curves are bounded, since 0\le {Y}_{kl}^{(i,j)}(t)\le 1 for all t\in [0,T]. Figure 1 shows the data averaged over trials. The top panel shows the functions that correspond to bs stimulation and the bottom panel shows the ones for bf stimulation. Different colors are used for the different levels of G.
We search for population differences in the dynamics of the awake-like period induced by each stimulus, taking into account the possible effect of the other factor: the difference in orientation selectivity. This problem can be dealt with as a functional two-way ANOVA with a two-level factor (stimulus) and a five-level factor (G).
Functional ANOVA
As already mentioned, the aim of this work is to search for differences in synchrony dynamics relative to two factors: stimulus and difference in orientation selectivity. As the difference in orientation selectivity can only take values in a given finite set, the problem is a two-way analysis of variance with fixed effects in which the response variable is functional. In the following subsection, we outline the methods.
The random projection method for the ANOVA model
The random projection approach in the functional data context is based on the ideas of Cuesta-Albertos et al. (2007) [25]. These authors give an extension, to Hilbert spaces, of the Cramér-Wold theorem, which characterizes a probability distribution in terms of one-dimensional projections. Their Theorem 4.1 states that if two distributions are different and we choose a marginal of them at random, those marginals will almost surely be different. Based on this fact, Cuesta-Albertos and Febrero-Bande [20] propose a method for hypothesis testing in infinite-dimensional spaces. We will state their result more formally, particularizing it to our problem. Let us assume the data belong to a Hilbert space, \mathcal{H}, with a scalar product 〈·,·〉, and let μ_{G} be a Gaussian distribution on \mathcal{H}. Suppose the hypothesis to be tested is whether certain parameters, say γ_{1} and γ_{2}, are equal (H_{0}: γ_{1}=γ_{2}). If γ_{1}≠γ_{2}, then the set of random directions, ν, from μ_{G} in \mathcal{H} for which 〈ν,γ_{1}〉=〈ν,γ_{2}〉 has probability zero. That is, if H_{0} fails, then it also fails in its projected version for μ_{G}-almost every ν∈\mathcal{H}. Therefore, a test at level α in the one-dimensional projected space is a test at the same level for H_{0}.
We now present the functional two-way ANOVA model for our problem and state the methodology more formally. We consider the following linear model for the synchrony curves:

{Y}_{kl}^{(i,j)}(t)=m(t)+{\alpha}_{k}(t)+{\beta}_{g(i,j)}(t)+{\gamma}_{kg(i,j)}(t)+{\epsilon}_{kl}^{(i,j)}(t),

for k=1,2 and l=1,2,3,4. The function {Y}_{kl}^{(i,j)}(t) is the ICCSI for trial l of the pair given by neurons i and j under stimulus k; m(t) is the global effect curve and α_{k}(t) is the effect of stimulus k. The function g:Ψ→{1,2,…,5} indicates the level of the factor G that corresponds to the pair given by neurons i and j, identifying level 1 with 0°, level 2 with 22.5° and so on. Therefore, β_{g(i,j)} is the effect of level g(i,j) on the synchrony curve. The effect of a possible interaction between the factors is gathered by γ_{kg(i,j)} and, finally, {\epsilon}_{kl}^{(i,j)}(t) is the random error term. For parameter identifiability, we assume:

\sum_{k=1}^{2}{\alpha}_{k}(t)=0,\qquad \sum_{g=1}^{5}{\beta}_{g}(t)=0,\qquad \sum_{k=1}^{2}\sum_{g=1}^{5}{\gamma}_{kg}(t)=0,\qquad \forall t\in[0,T]. \qquad (4)
The relevant null hypotheses to be tested are:

{H}_{0}^{\alpha}:\ {\alpha}_{k}(t)=0\quad \forall t,\ \forall k\in\{1,2\},

which means that there is no effect of the stimulus, and

{H}_{0}^{\beta}:\ {\beta}_{g}(t)=0\quad \forall t,\ \forall g\in\{1,\dots,5\},

which states that there is no effect of the orientation selectivity. Also, a hypothesis for the interactions is of interest:

{H}_{0}^{\gamma}:\ {\gamma}_{kg}(t)=0\quad \forall t,\ \forall k\in\{1,2\},\ \forall g\in\{1,\dots,5\}.
Theorem 2.1 in [20] states that, if the data belong to a Hilbert space, \mathcal{H}, endowed with a scalar product 〈·,·〉, and μ_{G} is a Gaussian distribution on \mathcal{H} such that its one-dimensional projections are non-degenerate, then:

1. If ∃ k∈{1,2} such that α_{k}≠0, then
{\mu}_{G}(\{\nu \in \mathcal{H}: 〈\nu,{\alpha}_{k}〉=0\ \forall k\in \{1,2\}\})=0.

2. If ∃ g∈{1,2,…,5} such that β_{g}≠0, then
{\mu}_{G}(\{\nu \in \mathcal{H}: 〈\nu,{\beta}_{g}〉=0\ \forall g\in \{1,2,\dots,5\}\})=0.

3. If ∃ k∈{1,2} and g∈{1,2,…,5} such that γ_{kg}≠0, then
{\mu}_{G}(\{\nu \in \mathcal{H}: 〈\nu,{\gamma}_{kg}〉=0\ \forall k\in \{1,2\},\ \forall g\in \{1,2,\dots,5\}\})=0.
Therefore, the proposed procedure is to randomly project the data on a one-dimensional space and to test the hypotheses in that context. Given a random function, ν(t), and denoting the projection of a function f∈\mathcal{H} in the direction of ν by f^{(ν)}=〈ν,f〉, we consider the projected model:

{Y}_{kl}^{(i,j),(\nu)}={m}^{(\nu)}+{\alpha}_{k}^{(\nu)}+{\beta}_{g(i,j)}^{(\nu)}+{\gamma}_{kg(i,j)}^{(\nu)}+{\epsilon}_{kl}^{(i,j),(\nu)}, \qquad (5)
and the hypotheses in the one-dimensional problem:

{H}_{0}^{\alpha,(\nu)}:\ {\alpha}_{k}^{(\nu)}=0\ \forall k,\qquad {H}_{0}^{\beta,(\nu)}:\ {\beta}_{g}^{(\nu)}=0\ \forall g,\qquad {H}_{0}^{\gamma,(\nu)}:\ {\gamma}_{kg}^{(\nu)}=0\ \forall k,g. \qquad (6)
The tests defined in the one-dimensional response case are clearly conditional on the random projection used, ν. To reduce the error introduced by the choice of the random projection, we will use the correction that controls the false discovery rate (FDR), introduced by Benjamini and Hochberg (1995) [26]. This method is also recommended by Cuesta-Albertos and Febrero-Bande [20]. In particular, we use the FDR procedure that arises from the work of Benjamini and Yekutieli (2001) [27]. Given the ordered p-values p_{(1)}≤…≤p_{(s)} obtained using s random projections, we will choose the corrected p-value as the quantity \min\left\{\frac{s}{i}{p}_{(i)},\ i=1,\dots,s\right\}.
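The corrected p-value just described is straightforward to compute; a small Python sketch (the function name and the example p-values are ours):

```python
import numpy as np

def fdr_corrected_pvalue(pvalues):
    """Corrected p-value from s random-projection p-values:
    the minimum over i of (s / i) * p_(i), where p_(1) <= ... <= p_(s)."""
    p = np.sort(np.asarray(pvalues, dtype=float))
    s = p.size
    ranks = np.arange(1, s + 1)
    return float(np.min(p * s / ranks))

# Example with s = 5 projections.
corrected = fdr_corrected_pvalue([0.04, 0.30, 0.01, 0.25, 0.02])
print(corrected)  # → 0.05
```

Here both the smallest and second-smallest p-values attain the minimum: 5 × 0.01 = (5/2) × 0.02 = 0.05.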
So far, this test can help in the search for global differences between the two groups of curves, although we would rather study how these differences change in time. To do this, we propose the use of moving windows along time. For each time point, t, we consider an interval of time, centered at t, and project the pieces of curves that correspond to that interval, to perform the ANOVA test.
The hypotheses in (6) can be tested with any regular ANOVA approach. Nevertheless, some caution is needed, as the errors in our model cannot be assumed to be independent, as we discuss in the next subsection.
ANOVA model with dependent errors
In this section, we introduce the problem of the dependence present in the data. The dependence comes from the fact that the data are observed at the neuron-pair level, so it is natural to think that the curves obtained from two pairs of neurons with one cell in common could be correlated.
Since we work with the projections of the Y functions, we can forget about the infinite-dimensional problem and focus on the one-dimensional one. Let us consider the model in (5), dropping the superscript (ν) for simplicity:

{Y}_{kl}^{(i,j)}=m+{\alpha}_{k}+{\beta}_{g(i,j)}+{\gamma}_{kg(i,j)}+{\epsilon}_{kl}^{(i,j)}, \qquad (7)
with (i,j)∈Ψ, k∈{1,2} and g∈{1,2,…,5}. Model (7) is a two-way ANOVA model with a two-level first factor and a five-level second factor, with unbalanced cells, as we do not observe the same number of pairs of neurons for each difference in orientation selectivity. The following representation of the problem is more convenient. Consider each {Y}_{k}^{(i,j)}, i.e., the synchrony of a given pair of neurons under each stimulus, as a random variable. In this way, we have four realizations of each variable, so our linear model becomes another linear model with 2r variables and L observations each. The model can be written in matrix form, as follows:

\mathbf{y}=\mathbf{X}\mathbf{\theta}+\mathbf{\epsilon}, \qquad (8)

where \mathbf{y}\in {\mathbb{R}}^{N\times 1} contains the data ordered in a convenient form, \mathbf{X}\in {\mathbb{R}}^{N\times q} is the design matrix and \mathbf{\theta}\in {\mathbb{R}}^{q\times 1} is the vector of parameters, θ^{T}=(m,α_{1},β_{1},β_{2},β_{3},β_{4},γ_{11},γ_{12},γ_{13},γ_{14},γ_{15},γ_{21},γ_{22},γ_{23},γ_{24}). The parameters α_{2}, β_{5} and γ_{25} are not included in the definition of θ because they are superfluous given the conditions defined in (4). Finally, \mathbf{\epsilon}\in {\mathbb{R}}^{N\times 1} is the vector of errors. Here, we consider the data ordered by stimulus and trial: for each stimulus k and each trial l, the r pair observations are stacked in a fixed order, and the vector of errors, ε, is arranged accordingly.
The assumptions of normality and homoscedasticity for the errors are reasonable in this context, but the fundamental problem in this study arises from the presence of dependence among the data that comes from pairs of neurons sharing a cell. That is, {Y}_{k}^{(i,j)} and {Y}_{k}^{({i}^{\prime},{j}^{\prime})} are dependent if {i,j}∩{i^{′},j^{′}}≠∅. The errors of the model are normally distributed with zero mean and covariance matrix Σ. We assume that the variance of {\epsilon}_{k}^{(i,j)} is the same for all (i,j) and all k, and equal to σ^{2}. On the other hand, we also assume that \text{cov}({\epsilon}_{k}^{(i,j)},{\epsilon}_{k}^{({i}^{\prime},{j}^{\prime})})=\rho {\sigma}^{2} whenever #({i,j}∩{i^{′},j^{′}})=1, where # denotes the cardinal (number of elements) of a set. Let Ω={(i,j,k,l,i^{′},j^{′},k^{′},l^{′}): #({i,j}∩{i^{′},j^{′}})=1, k=k^{′}, l=l^{′}}; then, in summary:

\text{cov}\left({\epsilon}_{kl}^{(i,j)},{\epsilon}_{{k}^{\prime}{l}^{\prime}}^{({i}^{\prime},{j}^{\prime})}\right)=\begin{cases}{\sigma}^{2}, & (i,j,k,l)=({i}^{\prime},{j}^{\prime},{k}^{\prime},{l}^{\prime}),\\ \rho {\sigma}^{2}, & (i,j,k,l,{i}^{\prime},{j}^{\prime},{k}^{\prime},{l}^{\prime})\in \Omega,\\ 0, & \text{otherwise.}\end{cases}
Therefore, Σ is a very special matrix, with σ^{2} in the diagonal, and σ^{2}ρ wherever the variable in the column and the variable in the row share a neuron and also share a trial (and, thus, a stimulus). That is, Σ is an N×N block-diagonal matrix composed of K L r×r blocks, with zeros elsewhere. The blocks in the diagonal are all equal to the covariance matrix, σ^{2}C, of the r observations that correspond to one trial under one level of the first factor (stimulus), with C=C(ρ) given, in our particular example (r=28), by

{(C)}_{ab}=\begin{cases}1, & a=b,\\ \rho, & \text{pairs } a \text{ and } b \text{ share exactly one neuron},\\ 0, & \text{otherwise,}\end{cases}\qquad a,b=1,\dots,r. \qquad (10)
With this definition, Σ is positive definite only for small values of ρ. The eigenvalues of the matrix in (10) take only three different values: (1−2ρ)σ^{2}, (1+4ρ)σ^{2} and (1+12ρ)σ^{2}, which, for ρ∈[0,1], are all simultaneously greater than zero whenever ρ<0.5. This means that this correlation structure for the data is plausible only for ρ∈[0,0.5).
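The eigenvalue claim can be checked numerically. The sketch below (the value of ρ is arbitrary) builds C for the r = 28 pairs of n = 8 neurons, placing ρ wherever two pairs share exactly one neuron, and recovers the three stated eigenvalues:

```python
import itertools
import numpy as np

n = 8
pairs = list(itertools.combinations(range(n), 2))
r = len(pairs)                       # 28 pairs
rho = 0.2

# C: ones on the diagonal, rho where two pairs share exactly one neuron.
C = np.eye(r)
for a in range(r):
    for b in range(r):
        if a != b and len(set(pairs[a]) & set(pairs[b])) == 1:
            C[a, b] = rho

eigenvalues = sorted(set(np.round(np.linalg.eigvalsh(C), 8)))
print(eigenvalues)  # three distinct values: 1 - 2*rho, 1 + 4*rho, 1 + 12*rho
```

The smallest eigenvalue, 1 − 2ρ, is the one that forces ρ < 0.5 for positive definiteness.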
The F-statistic
Our ANOVA problem comprises testing the hypothesis that some of the parameters in model (8) are equal to zero. In matrix form, this can be written as

{H}_{0}:\ \mathbf{A}\mathbf{\theta}=\mathbf{h}, \qquad (11)

where, in the case of the stimulus effect, the hypothesis matrix \mathbf{A} is the row vector selecting the coefficient α_{1}; in the case of the orientation selectivity, \mathbf{A} is the matrix whose rows select the coefficients β_{1},…,β_{4}; and, in both cases, \mathbf{h} is a vector of zeros. Let us denote by q the total number of parameters in the complete model and let q_{0} be the number of parameters, different from zero, under the null hypothesis.
Let the residual sum of squares function, in matrix form, be denoted by Q(θ):

Q(\mathbf{\theta})={(\mathbf{y}-\mathbf{X}\mathbf{\theta})}^{T}(\mathbf{y}-\mathbf{X}\mathbf{\theta}).
Denote by \widehat{\theta} the estimate of θ in the complete one-dimensional model and let \stackrel{~}{\theta} be the estimate for the reduced model. That is, \stackrel{~}{\theta} is the vector of parameters that minimizes Q(θ) under the restriction given by (11). Then, the classical ANOVA F statistic is

F=\frac{\left(Q(\stackrel{~}{\theta})-Q(\widehat{\theta})\right)/{d}_{1}}{Q(\widehat{\theta})/{d}_{2}},
where d_{1} denotes the degrees of freedom of the numerator, that is, q−q_{0}, and d_{2} the degrees of freedom of the denominator: N−(q+1).
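In code, the statistic amounts to fitting the complete and reduced models by least squares and comparing the residual sums of squares. A generic Python sketch follows; the toy design and the `keep` argument are ours, and in the paper X would be the ANOVA design matrix above:

```python
import numpy as np

def anova_f(y, X, keep):
    """Classical ANOVA F statistic: the reduced model keeps only the
    columns of X listed in `keep` (the remaining coefficients are set
    to zero under the null hypothesis)."""
    theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    Q_full = np.sum((y - X @ theta_hat) ** 2)        # Q(theta_hat)
    Xr = X[:, keep]
    theta_tld, *_ = np.linalg.lstsq(Xr, y, rcond=None)
    Q_red = np.sum((y - Xr @ theta_tld) ** 2)        # Q(theta_tilde)
    d1 = X.shape[1] - Xr.shape[1]                    # dropped parameters
    d2 = y.size - X.shape[1]                         # residual dof
    return ((Q_red - Q_full) / d1) / (Q_full / d2)

# Toy check: a strong effect of one regressor should give a large F.
rng = np.random.default_rng(1)
N = 40
x = rng.normal(size=N)
X = np.column_stack([np.ones(N), x])
y = 2.0 + 3.0 * x + 0.5 * rng.normal(size=N)
F = anova_f(y, X, keep=[0])          # H0: the coefficient of x is zero
```

Because the reduced model's column space is contained in the full one, Q(θ̃) ≥ Q(θ̂), so F is nonnegative.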
The test statistic, F, under assumptions of independence, normality and homoscedasticity, follows an F distribution with q−q_{0} and N−(q+1) degrees of freedom. In our context, as the errors of the model cannot be assumed to be independent, the test statistic does not have an {F}_{{d}_{1},{d}_{2}} distribution and, therefore, we need to calibrate the null distribution of the test statistic. We propose a parametric bootstrap procedure to calibrate the distribution of the ANOVA test statistic under the null hypothesis.
Estimation of the correlation coefficient
Since we assume that the covariances between the different pairs of error terms are equal, provided the pairs belong to Ω, we will estimate the correlation coefficient as the average of the Pearson correlation coefficients of the corresponding pairs, which is equivalent to:

\widehat{\rho}=\frac{1}{\#\Omega\ {\widehat{\sigma}}^{2}}\sum_{\Omega}{\widehat{\epsilon}}_{kl}^{(i,j)}\,{\widehat{\epsilon}}_{{k}^{\prime}{l}^{\prime}}^{({i}^{\prime},{j}^{\prime})},

where {\widehat{\epsilon}}_{kl}^{(i,j)} are the elements of the residual vector \widehat{\mathbf{\epsilon}}=\mathbf{y}-\mathbf{X}\widehat{\theta} and {\widehat{\sigma}}^{2} is the estimated residual variance.
Direct bootstrap
As the errors are assumed to be normally distributed, we propose the use of a parametric bootstrap to calibrate the distribution of the ANOVA test statistic under the null hypothesis. We now describe the procedure for a general null hypothesis {H}_{0}^{i}: {\theta}_{i}=0.
Once the linear model has been fitted to obtain \widehat{\theta} and the classical ANOVA statistic has been computed (denoted by F^{obs}), we estimate σ^{2} and ρ from the residuals to build the estimated covariance matrix, \widehat{\mathbf{\Sigma}}=\mathbf{\Sigma}({\widehat{\sigma}}^{2},\widehat{\rho}). We proceed with the following bootstrap algorithm:

1. Replace the ith parameter in the estimated \widehat{\mathbf{\theta}} by zero (null hypothesis); denote the resulting parameter vector by {\widehat{\mathbf{\theta}}}_{0}. Build a bootstrap sample: {\mathbf{y}}^{\ast}=\mathbf{X}{\widehat{\mathbf{\theta}}}_{0}+\mathbf{z}, with \mathbf{z}\sim N(\mathbf{0},\widehat{\mathbf{\Sigma}}).

2. Fit the linear model to the resample, obtain the bootstrap version of the estimated parameters, {\widehat{\mathbf{\theta}}}^{\ast}, and compute F^{∗}, the bootstrap version of F.

3. Repeat Steps 1–2 B times to obtain F^{∗1},…,F^{∗B}.

4. Compute the desired (1−α)-quantile of the bootstrap versions, {F}_{1-\alpha}^{\ast}. We reject the null hypothesis if {F}^{obs}>{F}_{1-\alpha}^{\ast}.
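The four steps above can be sketched in Python as follows. This is a minimal illustration under stated assumptions (a toy design matrix, a known Σ̂, and a helper F computation); it is not the paper's code.

```python
import numpy as np

def f_stat(y, X, keep):
    """ANOVA F statistic for dropping the columns of X not listed in `keep`."""
    th, *_ = np.linalg.lstsq(X, y, rcond=None)
    Qf = np.sum((y - X @ th) ** 2)
    Xr = X[:, keep]
    tr, *_ = np.linalg.lstsq(Xr, y, rcond=None)
    Qr = np.sum((y - Xr @ tr) ** 2)
    return ((Qr - Qf) / (X.shape[1] - Xr.shape[1])) / (Qf / (y.size - X.shape[1]))

def parametric_bootstrap_test(y, X, Sigma_hat, keep, B=500, alpha=0.05, seed=0):
    """Steps 1-4: impose H0 on the fitted parameters, resample correlated
    Gaussian errors, and compare F_obs with the bootstrap (1 - alpha)-quantile."""
    rng = np.random.default_rng(seed)
    F_obs = f_stat(y, X, keep)
    th, *_ = np.linalg.lstsq(X, y, rcond=None)
    th0 = th.copy()
    th0[[i for i in range(X.shape[1]) if i not in keep]] = 0.0  # Step 1: H0
    mean = X @ th0
    F_boot = np.empty(B)
    for b in range(B):                                          # Steps 2-3
        y_star = mean + rng.multivariate_normal(np.zeros(y.size), Sigma_hat)
        F_boot[b] = f_stat(y_star, X, keep)
    crit = np.quantile(F_boot, 1.0 - alpha)                     # Step 4
    return F_obs, crit, F_obs > crit

# Toy two-group example with independent errors (Sigma_hat = 0.25 * I).
rng = np.random.default_rng(3)
N = 30
x = np.repeat([0.0, 1.0], 15)
X = np.column_stack([np.ones(N), x])
y = 1.0 + 2.0 * x + 0.5 * rng.standard_normal(N)
F_obs, crit, reject = parametric_bootstrap_test(y, X, 0.25 * np.eye(N),
                                                keep=[0], B=200)
```

With a correlated Σ̂ of the form described above, the same code calibrates the test under dependence; only the covariance matrix passed in changes.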
Results and discussion
In this section we show the results of applying the random projection method to the functional ANOVA problem at hand. To draw the random vectors we use Brownian motions or, more precisely, approximations to standard Brownian motions by sequences of partial sums of normally distributed random variables. We only need to compute the values of the random vectors at the equidistant time points, t_{1},…,t_{M}, where the functions Y(t) are defined. For this, we consider M independent and identically distributed standard normal random variables, Z_{1},…,Z_{M}, and define a trajectory ν_{1} as

{\nu}_{1}({t}_{k})=\frac{1}{\sqrt{M}}\sum_{i=1}^{k}{Z}_{i},\qquad k=1,\dots,M. \qquad (14)
On the other hand, we would like to have directions without trend and such that their variability throughout the trajectory does not change too much. To this aim, we define the random trajectories as the sum of two Brownian motions, as just defined, where one of them has been “flipped” so as to be approximately zero at the last time point t_{M}. That is, let ν_{1} and ν_{2} be defined as in (14) and let ν_{3}(t_{k})=ν_{2}(t_{M−k+1}). The final directions we use are defined as ν(t)=ν_{1}(t)+ν_{3}(t).
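These trajectories are simple to generate on the same grid as the data. A Python sketch (grid size, spacing and seed are ours), together with the projection computed as a discretized scalar product:

```python
import numpy as np

def brownian_direction(M, rng):
    """nu = nu1 + nu3: two partial-sum Brownian approximations, the second
    one reversed in time so that the sum has no systematic trend."""
    nu1 = np.cumsum(rng.standard_normal(M)) / np.sqrt(M)
    nu2 = np.cumsum(rng.standard_normal(M)) / np.sqrt(M)
    nu3 = nu2[::-1]                   # nu3(t_k) = nu2(t_{M-k+1})
    return nu1 + nu3

def project(curve, nu, dt):
    """Discretized scalar product <nu, curve> on an equispaced grid."""
    return float(np.sum(nu * curve) * dt)

rng = np.random.default_rng(7)
M, dt = 2201, 0.1                     # grid used for the synchrony curves
nu = brownian_direction(M, rng)
t = np.linspace(0.0, (M - 1) * dt, M)
p = project(np.sin(t / 40.0), nu, dt)
```

Projecting every synchrony curve onto the same ν reduces the functional ANOVA to the one-dimensional model (7).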
A preliminary analysis, fitting model (7), showed that the interaction between factors was not significant. Therefore, the final model considered is:

{Y}_{kl}^{(i,j)}=m+{\alpha}_{k}+{\beta}_{g(i,j)}+{\epsilon}_{kl}^{(i,j)}, \qquad (15)
with k∈{1,2}, g(i,j)∈{1,2,…,5} and l∈{1,2,3,4}. Figure 2 shows the p-values obtained by using sliding windows across time to study the evolution of the effects of both factors. A 40-s time window was considered, moving along the time axis (in seconds) from 20 s of recording to 215 s. In the time period between 110 s and 150 s, this was done every second; for the rest, every 5 s. At each window, 30 random directions were used to project the data (the same ones in every window) and the FDR correction was applied, resulting in a single p-value. It is clear that there are differences between the two approaches used. When dependence is not taken into account (red lines), the test is less conservative than when dependence is included. Although, for the effect of the stimuli, there is a period of time at the beginning of the awake-like period (right after stimulation) for which both tests reject the null hypothesis, there follows another period in which the null hypothesis would be rejected only if dependence were not accounted for. The results show that a period of time exists during the awake-like mode when the difference between the effects of the two stimuli is significant. This result reinforces the view that there are important differences in the physiology and dynamics of the two activating pathways: bs and bf. On the other hand, the differences in synchrony among the levels of the factor G were also found to be significant after the stimulus.
The estimate of the correlation coefficient changes for each window; nevertheless, it is not very variable, even from one projection to another. Figure 3 shows \widehat{\rho} as a function of time and its mean across projections. We can observe that, at the beginning of the recording, the estimated correlation coefficients were greater than 0.5 and had to be truncated for the covariance matrix, \widehat{\mathbf{\Sigma}}, to be positive definite.
When large correlation is present (\widehat{\rho}>0.5), an alternative, nonparametric bootstrap can be carried out. In the following procedure, each set of bootstrap residuals is defined as the set of original residuals of a trial chosen at random with equal probability from all possible trials:
1. For each k∈{1,2} and l∈{1,2,3,4}, draw the bootstrap pair (k^{∗},l^{∗}) with equal probability from {1,2}×{1,2,3,4}; that is, P({k}^{\ast}={k}^{\prime},{l}^{\ast}={l}^{\prime})=\frac{1}{8} for all k^{′}∈{1,2} and all l^{′}∈{1,2,3,4}.
2. Define {\widehat{\epsilon}}_{k,l}^{(i,j)\ast}={\widehat{\epsilon}}_{{k}^{\ast},{l}^{\ast}}^{(i,j)} for all (i,j).
This bootstrap procedure has the drawback that, in our case, the vector of bootstrap residuals can only take eight possible values. A possible improvement is to use a smoothed version. To achieve this, a smoothing parameter h, typically small with respect to the standard deviation of the residuals, is chosen and Step 2 is replaced by:
2. Define {\widehat{\epsilon}}_{k,l}^{(i,j)\ast}={\widehat{\epsilon}}_{{k}^{\ast},{l}^{\ast}}^{(i,j)}+h{Z}_{k,l}^{(i,j)\ast}, with {Z}_{k,l}^{(i,j)\ast}\sim N(0,1) i.i.d. for all (i,j).
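As a concrete illustration, Steps 1 and 2 above can be sketched as follows. The dict-of-trials data layout and all names are ours, chosen only to make the resampling scheme explicit; this is not the authors' code.

```python
import numpy as np

def smoothed_trial_bootstrap(residuals, h=0.05, seed=0):
    """One bootstrap draw of residual curves by resampling whole trials.

    `residuals` maps each trial label (k, l) to an array of residual
    curves, one row per neuron pair (i, j). Every cell (k, l) receives
    the complete set of residuals of a trial (k*, l*) drawn uniformly
    at random from the 2 x 4 = 8 trials (Step 1), and, when h > 0,
    smoothing noise h * Z with Z ~ N(0, 1) i.i.d. is added (Step 2).
    """
    rng = np.random.default_rng(seed)
    cells = list(residuals.keys())          # the eight possible trials
    boot = {}
    for kl in cells:
        k_star, l_star = cells[rng.integers(len(cells))]
        eps = residuals[(k_star, l_star)]
        boot[kl] = eps + h * rng.standard_normal(eps.shape)
    return boot
```

With h = 0 this reduces to the plain trial-resampling plan, whose bootstrap residual vector takes only eight possible values, as noted above.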
The green curve in Figure 2 represents the p-values obtained with an alternative bootstrap algorithm based on an approximation to the true distribution of the F statistic. This alternative method, which we call the χ^{2}-based bootstrap, reduces computational time considerably. It relies on some theoretical insight that we describe in the Appendix. The main result is that the test statistic F can be expressed as a ratio of quadratic forms in the errors of the model. There are many results concerning the exact distribution of quadratic forms in normal vectors. For example, Provost et al. [28] derive the exact density function of an indefinite quadratic form in noncentral normal vectors, from which the distribution of ratios of quadratic forms such as those involved in ANOVA tests can be obtained. Nevertheless, the closed-form expressions for these density functions are complex and impractical. On the other hand, the distributions of the numerator and the denominator of the F statistic are easy to approximate by Monte Carlo, as also described in the Appendix. We use this approach in the next two sections to evaluate the test: first, we compare the distribution of the test statistic when dependence is and is not taken into account; then, we perform a simulation study to evaluate the performance of the bootstrap test.
Distribution of the test statistic F
To visualize the differences between the distribution of the test statistic F when dependence is taken into account and when independence between observations is assumed (the F distribution), Figures 4 and 5 show the density of F under the null hypothesis, approximated by Monte Carlo in the numerator and denominator of (20). The model used is the same as in the real data case, i.e., y=X θ+ε as given by (15). The real data scenario was reproduced by constructing the model for eight neurons and four trials under each stimulus; the values of the second factor were exactly as in the real case. The errors were assumed ε∼N(0,Σ), with Σ defined as in (10), σ^{2}=1 and several values of the correlation coefficient: ρ = 0, 0.1, 0.15 and 0.4. The first panels of Figures 4 and 5 show that the null distribution of the test statistic corresponds to the {F}_{{d}_{1},{d}_{2}} distribution (as it should) when ρ=0. Figure 4 shows that, on the other hand, the {F}_{{d}_{1},{d}_{2}} distribution departs from the null distribution of the test statistic as ρ increases. This demonstrates the need to calibrate the distribution of F by bootstrap instead of using the F distribution. Figure 5 shows the comparison of both densities for the hypothesis on β_{ i }. In this case the difference is not as large as in the previous case, although a small deviation can still be observed in the last panel. In general, as ρ increases, the tail of the distribution becomes heavier. This explains why the test using the classical approach sometimes rejects while the bootstrap approach does not.
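The consequence of ignoring dependence can also be illustrated with a small simulation. The sketch below uses a deliberately simplified design, a one-way layout with equicorrelated errors inside each group, standing in for the covariance structure (10); it is not the paper's two-way model, and the function name and defaults are illustrative.

```python
import numpy as np
from scipy import stats

def null_rejection_rate(rho, n_groups=2, n_per_group=16,
                        n_sim=2000, alpha=0.05, seed=0):
    """Monte Carlo type I error of the classical one-way ANOVA F test
    when observations within each group share a common correlation rho.
    The equicorrelated one-way layout is a simplified stand-in for the
    paper's covariance model, chosen only to illustrate the effect of
    ignoring dependence."""
    rng = np.random.default_rng(seed)
    n = n_groups * n_per_group
    # block-diagonal covariance: equicorrelation inside each group
    block = np.full((n_per_group, n_per_group), rho) + (1 - rho) * np.eye(n_per_group)
    chol = np.linalg.cholesky(np.kron(np.eye(n_groups), block))
    labels = np.repeat(np.arange(n_groups), n_per_group)
    crit = stats.f.ppf(1 - alpha, n_groups - 1, n - n_groups)
    rejections = 0
    for _ in range(n_sim):
        y = chol @ rng.standard_normal(n)   # null model: no group effect
        samples = [y[labels == g] for g in range(n_groups)]
        rejections += stats.f_oneway(*samples).statistic > crit
    return rejections / n_sim
```

With ρ = 0 the empirical level stays near the nominal 5%, while positive within-group correlation inflates it well above the nominal level, mirroring the heavier tail discussed above.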
Performance of the test
In this section we perform a simulation study to evaluate whether the type I error (the probability of rejecting the null hypothesis given that it is true) of our test is close to its nominal value (α=0.05). We also evaluate the power of the test, i.e., the probability that it correctly rejects the null hypothesis when the null hypothesis is false. To this end, we simulated data, similar to the real data, using different (known) model parameters and correlation coefficients. With the simulated data, we computed F and followed the procedure used to calibrate the distribution of the test statistic at level α=0.05. Finally, we compared the outcome (acceptance/rejection of the null hypothesis) with the truth. The data were generated as if three trials under two stimulation conditions had been recorded for a group of seven neurons. Each neuron was assigned a fixed characteristic (orientation selectivity) so that the second factor (difference in orientation selectivity) could be computed. In this case, the preferred orientations took only two possible values, assigned arbitrarily to each neuron. Then,
where θ=(m,α_{1},β_{1})^{t}. The covariance matrix, Σ, was defined as in (9) for four values of ρ. The values used for the simulations are m=0, α_{1}=0,0.25,0.5, β_{1}=0,0.25,0.5 and ρ=0,0.05,0.15,0.35.
A large variance (σ^{2}=1) with respect to the parameters was chosen to reflect a typical situation for the real data: for most projections, the signal-to-noise ratio is rather small. This large variance might affect the ability of the test to detect differences between the parameters. M=5000 Monte Carlo simulations were performed and, for each one, B=500 bootstrap trials were used. The results are shown in Tables 1, 2 and 3.

Table 1 shows the proportion of rejections when testing for no effect of the first factor. The nominal level of the test is not well respected under dependence: the test is moderately anticonservative when using the bootstrap, and severely anticonservative when using the F distribution. With the bootstrap, the rejection percentage under the null remains close to the nominal level even as the dependence increases. This is not the case with the F distribution, for which the rejection percentage under the null is inadmissibly large. Table 2 shows the proportion of rejections when testing for no effect of the second factor. In this case, the bootstrap respects the nominal level reasonably well when testing for β_{1}, whereas the test based on the F distribution is very conservative under large dependence.

Regarding the power of the test, Table 1 shows that the proportion of rejections decreases as ρ increases for both tests. The power of the test based on the F distribution is noticeably larger than that of the bootstrap test, especially for large correlations. Nevertheless, since the bootstrap respects the level of the test far better, it is the more appropriate method. Table 2 shows that, surprisingly, the value of the correlation parameter does not influence the power when testing for β_{1}. For β_{1}=0.5, close to 100% rejection under the alternative is observed in almost all cases.
Table 2 also shows that the power of the bootstrap test is better than that of the test calibrated with the F distribution.
Finally, we report similar simulation results when testing for the interaction between the two factors. Table 3 shows the proportion of rejections under the null hypothesis for different values of ρ and 5000 Monte Carlo replications. The nominal level is met by the bootstrap test for small correlation values. Although for large correlation values the test seems conservative, it still outperforms the F test remarkably. Regarding power, in these simulations the test rejected the null hypothesis 100% of the time (results not shown), using values of γ=0.5,1 to simulate data under the alternative.
Conclusions
In this work we proposed a functional two-way ANOVA for neural synchrony curves, using random projection techniques. These methods are easy to implement and interpret, which makes them appealing for many problems. The method was shown to be useful, as it allowed the significance of the effects of the factors under study to be assessed.
The model under study involves synchrony curves obtained by a cross-correlation based method. The curves were separated into groups given by the stimuli and by the difference in preferred orientation between the two neurons involved in each curve. Differences between the levels of this second factor were also of interest. Although some groups had very few elements, several conclusions can be established.
The importance of accounting for the dependence between curves in the analysis was shown. The distribution of the test statistic was approximated using a parametric bootstrap on the residuals of the model, allowing for dependence. The classical F test can lead to false positives, as the distribution of the test statistic has a heavier tail than the F distribution. Two algorithms were presented to carry out the bootstrap: a direct resampling plan, which resamples from the model, and a second one based on the fact that the F statistic has the same distribution as a ratio of particular linear combinations of {\chi}_{1}^{2} variables. The second algorithm has a substantially lower computational cost than the first.
Regarding the real data, the interactions between the factors were not statistically significant. An effect of the stimulus factor can be found at the beginning of the awake-like period (a few seconds after stimulus onset). To make a definitive statement about the differential effect of bs and bf on pairwise synchrony, more groups of neurons would need to be analyzed. However, this work shows that the question is worth asking, as we were able to find evidence of these differences after the switch from anesthesia to the awake-like mode. In fact, given the ability of bs and bf to promote wakefulness with a similar effect on ECoG power spectral density [29], the robustness of the proposed statistical method (applicable to low-activity cortical single units recorded simultaneously) allowed us to find differences in neural synchrony patterns that are consistent with the physiological data, as follows. Activation of the parabrachial nucleus in the bs enables thalamic relay neurons to disrupt cortical synchronization via glutamate release [30]. In contrast, bf stimulation induces not only ACh release, but also GABA, glutamate and NO release in the cortex [18, 31]. Thus, although the effect of bs and bf activation on ECoG dynamics has been characterized as regulatory of cortical activation, the different operational mechanism of each system could be reflected in the temporal coordination between neurons. Finally, a relevant issue in neurophysiology concerns the type of information processing carried out by primary visual cortex neurons [32–34]. Briefly, some basic properties of visual processing, such as orientation selectivity, could be based either on highly refined and specific anatomical connections or, in contrast, could be carried out by distributed computational processes at the cortical level.
The results presented here show that the effect of the difference in orientation selectivity was significant throughout the generated awake-like activity, suggesting that the strength of the connectivity depends on the orientation selectivity of primary visual cortex neurons, thus favoring the second hypothesis.
Overall, this work contributes to the development of statistical tools for neuroscience. Although the methods proposed here have been focused on a particular problem, they are applicable in many other contexts. It is worth mentioning that differences were found even though the firing activity of the test group was quite low, showing that the method can be useful when firing rates are low. Also, bootstrap techniques are very powerful and easy to implement (although, admittedly, they can be computationally expensive), and can be a good alternative to parametric inference, requiring minimal mathematical assumptions.
Appendix: χ^{2}-based bootstrap
Here we present the results that provide a computationally more efficient way to perform the bootstrap test. First of all, let us rewrite F in a more convenient form. Recall
where \widehat{\theta} is the estimator of the parameter vector θ in the unconstrained model, while \stackrel{~}{\theta} is the estimator under the null hypothesis (H θ=h). It can be shown that \widehat{\theta} and \stackrel{~}{\theta} relate as follows:
where
and,
which implies that
Moreover, we can write this expression as a quadratic form based on the errors of the model by noticing that:
Therefore,
and finally,
In a similar way, the sum of squares of the denominator can be expressed as
From (16) and (17) we obtain:
Let us denote the matrix in the last expression of (18) by
and the final matrix in (17) by
Finally, we can give the expression for the F statistic in terms of quadratic forms on normal vectors:
where ε∼N(0,Σ).
Let us denote Z=ε^{t}A_{1}ε. Also, let S be a matrix such that Σ=S S^{t} and let R=S^{−1}ε. We denote the eigenvalues of S^{t}A_{1}S, or equivalently those of Σ A_{1}, by λ_{1},…,λ_{ N } and by P, an orthogonal matrix whose columns are the eigenvectors of S^{t}A_{1}S. So, Z=R^{t}S^{t}A_{1}S R=R^{t}P D P^{t}R=ε^{t}(S^{−1})^{t}P D P^{t}S^{−1}ε=V^{t}D V, where V=P^{t}S^{−1}ε and D is a diagonal matrix with diagonal elements λ_{1},…,λ_{ N }. It follows that V∼N(0,I) and, therefore,
where {V}_{i}^{2}\sim {\chi}_{1}^{2} for i=1,…,N.
The same argument is true for the quadratic form in the denominator of F. Thus, the distribution of ε^{t}A_{2}ε is the same as the distribution of {\sum}_{i=1}^{N}{\mu}_{i}{W}_{i}^{2}, where μ_{1},…,μ_{ N } are the eigenvalues of Σ A_{2} and {W}_{i}^{2}\sim {\chi}_{1}^{2}, i=1,…,N. Consequently, the ratio
has the same distribution as F. The distribution of (20) can be approximated by Monte Carlo.
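The algebra above can be checked numerically. The snippet below builds an arbitrary positive-definite Σ and a symmetric A_{1}, and verifies both that the eigenvalues of S^{t}A_{1}S coincide with those of Σ A_{1} and that ε^{t}A_{1}ε equals V^{t}D V; the matrices are random illustrative choices, not the ones arising from the model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
m = rng.standard_normal((n, n))
sigma = m @ m.T + n * np.eye(n)        # an arbitrary positive-definite Sigma
a = rng.standard_normal((n, n))
a1 = (a + a.T) / 2                     # an arbitrary symmetric A1

s = np.linalg.cholesky(sigma)          # Sigma = S S^t
lam_sym = np.linalg.eigvalsh(s.T @ a1 @ s)              # eigenvalues of S^t A1 S
lam_prod = np.sort(np.linalg.eigvals(sigma @ a1).real)  # eigenvalues of Sigma A1

eps = s @ rng.standard_normal(n)       # one draw of eps ~ N(0, Sigma)
w, p = np.linalg.eigh(s.T @ a1 @ s)    # D = diag(w), P orthogonal
v = p.T @ np.linalg.solve(s, eps)      # V = P^t S^{-1} eps
z_direct = eps @ a1 @ eps              # Z = eps^t A1 eps
z_eigen = v @ (w * v)                  # V^t D V = sum_i lambda_i V_i^2
```

Both pairs of quantities agree up to floating-point error, as the derivation predicts.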
In practice, Σ is not known, so the λ_{ i } and μ_{ i } cannot be computed. However, Σ can be replaced by \hat{\Sigma}=\mathbf{\Sigma}({\widehat{\sigma}}^{2},\widehat{\rho}) as in (10), and {\widehat{\lambda}}_{i} and {\widehat{\mu}}_{i} (the eigenvalues of \hat{\Sigma}{A}_{1} and \hat{\Sigma}{A}_{2}) can be used in (20). Observe that this is equivalent to the original bootstrap resampling plan presented above, and it gives a much less time-consuming way of performing the bootstrap. For instance, in the example shown in Figure 2, the results obtained with the direct bootstrap are reproduced, except for minor differences due to Monte Carlo error, and the computational time is at least 30 times smaller with this alternative method.
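A minimal sketch of the resulting Monte Carlo calibration might look as follows. It draws errors from N(0, \hat{\Sigma}) and evaluates the ratio of the two quadratic forms directly, so that numerator and denominator share each error draw; the function name and arguments are illustrative, and any degrees-of-freedom constants are assumed to be folded into the matrices.

```python
import numpy as np

def chi2_bootstrap_pvalue(f_obs, a1, a2, sigma_hat, n_mc=2000, seed=0):
    """Approximate p-value for F = (eps' A1 eps)/(eps' A2 eps) by Monte
    Carlo: draw eps ~ N(0, sigma_hat) and evaluate the ratio of the two
    quadratic forms for each draw. Names are illustrative."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(sigma_hat)
    n = sigma_hat.shape[0]
    eps = chol @ rng.standard_normal((n, n_mc))    # one column per draw
    num = np.einsum('in,ij,jn->n', eps, a1, eps)   # eps' A1 eps, per draw
    den = np.einsum('in,ij,jn->n', eps, a2, eps)   # eps' A2 eps, per draw
    return float(np.mean(num / den >= f_obs))
```

Because all n_mc draws are generated in one vectorized pass, this avoids refitting the model on every bootstrap resample, which is the source of the speed-up reported above.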
References
Kuhn A, Aertsen A, Rotter S: Higher-order statistics of input ensembles and the response of simple model neurons. Neural Comput. 2003, 15: 67-101.
Steinmetz PN, Roy A, Fitzgerald PJ, Hsiao SS, Johnson KO, Niebur E: Attention modulates synchronized neuronal firing in primate somatosensory cortex. Nature. 2000, 404: 187-190.
Stopfer M, Bhagavan S, Smith BH, Laurent G: Impaired odour discrimination on desynchronization of odour-encoding neural assemblies. Nature. 1997, 390: 70-74.
Vaadia E, Haalman I, Abeles M, Bergman H, Prut Y, Slovin H, Aertsen A: Dynamics of neuronal interactions in monkey cortex in relation to behavioral events. Nature. 1995, 373: 515-518.
Zohary E, Shadlen M, Newsome W: Correlated neuronal discharge rate and its implications for psychophysical performance. Nature. 1994, 370: 140-143.
Gerstein G, Perkel D: Simultaneously recorded trains of action potentials: analysis and functional interpretation. Science. 1969, 164: 828-830.
Grün S, Diesmann M, Aertsen A: Unitary events in multiple single-neuron spiking activity: I. Detection and significance. Neural Comput. 2002, 14: 43-80.
Grün S, Diesmann M, Aertsen A: Unitary events in multiple single-neuron spiking activity: II. Nonstationary data. Neural Comput. 2002, 14: 81-119.
Faes C, Geys H, Molenberghs G, Aerts M, Cadarso-Suárez C, Acuña C, Cano M: A flexible method to measure synchrony in neuronal firing. J Am Stat Assoc. 2008, 103: 149-261.
González-Montoro AM, Cao R, Faes C, Molenberghs G, Espinosa N, Cudeiro J, Mariño J: Cross nearest-spike interval based method to measure synchrony dynamics. Math Biosci Eng. 2014, doi:10.3934/mbe.2014.11.27.
Grün S: Data-driven significance estimation for precise spike correlation. J Neurophysiol. 2009, 101: 1126-1140.
Ventura V, Cai C, Kass RE: Statistical assessment of time-varying dependence between two neurons. J Neurophysiol. 2005, 94: 2940-2947.
Steriade M: Sleep oscillations and their blockage by activating systems. J Psychiat Neurosci. 1994, 19: 354-358.
Steriade M, Gloor P, Llinás RR, Lopes da Silva FH, Mesulam MM: Basic mechanisms of cerebral rhythmic activities. Electroencephalogr Clin Neurophysiol. 1990, 76: 481-508.
Brown EN, Lydic R, Schiff ND: General anesthesia, sleep, and coma. N Engl J Med. 2010, 363: 2638-2650.
Franks NP: General anaesthesia: from molecular targets to neuronal pathways of sleep and arousal. Nat Rev Neurosci. 2008, 9: 370-386.
Sleigh JW, Voss L, Steyn-Ross ML, Steyn-Ross DA, Wilson MT: Modelling sleep and general anaesthesia. Springer Ser Comp Neurosci. 2011, 15: 21-41.
Mariño J, Cudeiro J: Nitric oxide-mediated cortical activation: A diffuse wake-up system. J Neurosci. 2003, 23: 4299-4307.
Hubel DH, Wiesel TN: Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J Physiol. 1962, 160: 106-154.
Cuesta-Albertos J, Febrero-Bande M: A simple multiway ANOVA for functional data. Test. 2010, 19: 537-557.
Perkel DH, Gerstein GL, Moore GP: Neuronal spike trains and stochastic point processes. II. Simultaneous spike trains. Biophys J. 1967, 7: 419-440.
Ramsay JO, Silverman BW: Applied Functional Data Analysis: Methods and Case Studies. 2002, New York: Springer
Ramsay JO, Silverman BW: Functional Data Analysis. 2005, New York: Springer Series in Statistics
Ferraty F, Vieu P: Nonparametric Functional Data Analysis: Theory and Practice. 2006, New York: Springer
Cuesta-Albertos JA, Fraiman R, Ransford T: A sharp form of the Cramér-Wold theorem. J Theor Prob. 2007, 20: 201-209.
Benjamini Y, Hochberg Y: Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc B. 1995, 57: 289-300.
Benjamini Y, Yekutieli D: The control of the false discovery rate in multiple testing under dependency. Ann Stat. 2001, 29: 1165-1188.
Provost SB, Rudiuk EM: The exact distribution of indefinite quadratic forms in noncentral normal vectors. Ann Inst Statist Math. 1996, 48: 381-394.
Fuller PM, Sherman D, Pedersen NP, Saper CB, Lu J: Reassessment of the structural basis of the ascending arousal system. J Comp Neurol. 2011, 519: 933-956.
Steriade M: Arousal: revisiting the reticular activating system. Science. 1996, 272: 225-226.
Henny P, Jones BE: Projections from basal forebrain to prefrontal cortex comprise cholinergic, GABAergic and glutamatergic inputs to pyramidal cells or interneurons. Eur J Neurosci. 2008, 27: 654-670.
Schummers J, Mariño J, Sur M: Synaptic integration by V1 neurons depends on location within the orientation map. Neuron. 2002, 36: 969-978.
Mariño J, Schummers J, Lyon DC, Schwabe L, Beck O, Wiesing P, Obermayer K, Sur M: Invariant computations in local cortical networks with balanced excitation and inhibition. Nat Neurosci. 2005, 8: 194-201.
Stimberg M, Wimmer K, Martin R, Schwabe L, Mariño J, Schummers J, Lyon DC, Sur M, Obermayer K: The operating regime of local computations in primary visual cortex. Cereb Cortex. 2009, 19: 2166-2180.
Acknowledgments
This work was supported by grant MTM2011-22392 from the Spanish Ministry of Economy and Competitiveness. AMGM was supported by a “Formación de Personal Investigador” (FPI) grant (BES-2009-017772) from the Spanish Ministry of Economy and Competitiveness. AMGM, NE and JM were supported by Xunta de Galicia (INCITE09 137 272 PR). The authors would like to thank Professor Juan Cuesta-Albertos for his useful comments during the early stage of this research.
Author information
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
NE, JC and JM conceived the neurophysiological problem and designed and performed the experiments. AMGM and RC designed the model and the hypothesis tests. AMGM implemented the statistical tools, and performed the analyses and computational experiments. AMGM, RC, NE and JM analyzed the results. AMGM and RC drafted the manuscript. All authors revised and approved the final version of the manuscript.
Rights and permissions
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
About this article
Cite this article
González Montoro, A.M., Cao, R., Espinosa, N. et al. Functional two-way analysis of variance and bootstrap methods for neural synchrony analysis. BMC Neurosci 15, 96 (2014). https://doi.org/10.1186/1471-2202-15-96
Keywords
 Cross-correlation analysis
 Bootstrap
 Spike trains
 Dependence
 Low firing rate
 Functional data