Fig. 1 | BMC Neuroscience


From: Empirical Bayesian significance measure of neuronal spike response


a Simulated neural network used in Experiment 1. Three groups of neurons, G1, G2, and G3, form a recurrent network: G1 \(\rightarrow\) G2 \(\rightarrow\) G3 \(\rightarrow\) G1. Gray-scale blocks in the matrix denote direct functional connectivity between source (pre-synaptic) and destination (post-synaptic) neurons. b–g Performance comparison between GLMs based on sparse estimation with the group lasso (gl) and on L2 regularization (l2); logistic regression (lr) and Poisson regression (pr) are also compared. The GLM parameters were estimated under various hyperparameter settings, and the best hyperparameter value for each case was chosen to maximize the likelihood on a validation dataset. The results were evaluated by sensitivity and false-positive proportion, computed against the true directed link set of the simulation. Experiments with short (\(T=2000\)) and long (\(T=10{,}000\)) datasets are shown in the upper (b–d) and lower (e–g) sets of panels, respectively. b, e ROC curves for each case at the best hyperparameter that maximizes the validation likelihood. c, f AUC score for each setting of the regularization hyperparameter; markers denote the values at the best hyperparameter that maximizes the validation likelihood. d, g Accuracy of model fitting, measured by the validation likelihood, for each setting of the regularization hyperparameter; markers denote the values at the best hyperparameter that optimizes model fitting. We found that the differences between the logistic and Poisson regression models were small. ROC curves and AUC were not necessarily largest at the best-tuned hyperparameter values in each of the four cases. Sparse estimation with the group lasso (gl) was more sensitive to the hyperparameter setting than L2 regularization (l2)
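The estimation-and-evaluation pipeline described in the caption can be sketched in miniature. This is not the paper's implementation: it uses a three-neuron ring (one neuron standing in for each of G1, G2, and G3), a Bernoulli (logistic) GLM with one time-bin of spike history, plain gradient descent with an L2 penalty in place of the paper's fitted group-lasso/L2 GLMs, and a rank-based AUC over the off-diagonal links; all weights, rates, and hyperparameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# True directed ring connectivity, mimicking G1 -> G2 -> G3 -> G1
# (one neuron per group, purely for illustration).
N, T = 3, 2000
W_true = np.zeros((N, N))
W_true[0, 1] = W_true[1, 2] = W_true[2, 0] = 2.0  # W_true[j, i]: link j -> i

# Simulate binary spike trains from a logistic (Bernoulli GLM) model:
# P(neuron i spikes at t) = sigmoid(bias + sum_j W_true[j, i] * s_j(t-1)).
bias = -2.0
S = np.zeros((T, N))
S[0] = rng.random(N) < 0.2
for t in range(1, T):
    p = 1.0 / (1.0 + np.exp(-(bias + S[t - 1] @ W_true)))
    S[t] = rng.random(N) < p

def fit_logistic_l2(X, y, lam=1.0, lr=0.5, iters=2000):
    """L2-regularized logistic regression via batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y) + lam * w) / len(y)
        b -= lr * np.mean(p - y)
    return w

# Fit one GLM per post-synaptic neuron: predict its spikes at t from
# all neurons' spikes at t-1. Column i of W_hat holds neuron i's weights.
X = S[:-1]
W_hat = np.column_stack([fit_logistic_l2(X, S[1:, i]) for i in range(N)])

# Evaluate recovered links against the true directed link set
# (off-diagonal entries) with the rank-based (Mann-Whitney) AUC.
mask = ~np.eye(N, dtype=bool)
scores, labels = np.abs(W_hat)[mask], (W_true != 0)[mask]
pos, neg = scores[labels], scores[~labels]
auc = np.mean(pos[:, None] > neg[None, :])
print(f"AUC over off-diagonal links: {auc:.2f}")
```

In the figure this AUC is computed across hyperparameter settings (panels c, f), with the reported operating point chosen by validation likelihood rather than by the AUC itself, which is why the best-tuned setting need not coincide with the largest AUC.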
