
Beyond GLMs: A Generative Mixture Modeling Approach to Neural System Identification

Overview of attention for an article published in PLoS Computational Biology, November 2013

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (89th percentile)
  • Good Attention Score compared to outputs of the same age and source (73rd percentile)

Mentioned by

  • 1 blog
  • 5 X users
  • 2 Facebook pages
  • 1 Google+ user

Citations

  • 36 citations (Dimensions)

Readers on

  • 119 Mendeley
  • 1 CiteULike
Title
Beyond GLMs: A Generative Mixture Modeling Approach to Neural System Identification
Published in
PLoS Computational Biology, November 2013
DOI 10.1371/journal.pcbi.1003356
Pubmed ID
Authors

Lucas Theis, André Maia Chagas, Daniel Arnstein, Cornelius Schwarz, Matthias Bethge

Abstract

Generalized linear models (GLMs) represent a popular choice for the probabilistic characterization of neural spike responses. While GLMs are attractive for their computational tractability, they also impose strong assumptions and thus only allow for a limited range of stimulus-response relationships to be discovered. Alternative approaches exist that make only very weak assumptions but scale poorly to high-dimensional stimulus spaces. Here we seek an approach which can gracefully interpolate between the two extremes. We extend two frequently used special cases of the GLM (a linear and a quadratic model) by assuming that the spike-triggered and non-spike-triggered distributions can be adequately represented using Gaussian mixtures. Because we derive the model from a generative perspective, its components are easy to interpret as they correspond to, for example, the spike-triggered distribution and the interspike interval distribution. The model is able to capture complex dependencies on high-dimensional stimuli with far fewer parameters than other approaches such as histogram-based methods. The added flexibility comes at the cost of a non-concave log-likelihood. We show that in practice this does not have to be an issue and the mixture-based model is able to outperform generalized linear and quadratic models.
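
As a rough illustration of the generative idea described in the abstract (not the authors' implementation, which also handles spike history and interspike intervals), the sketch below fits separate Gaussian mixtures to spike-triggered and non-spike-triggered stimuli and combines them with the prior spike probability via Bayes' rule. The synthetic data, component counts, and function names are hypothetical.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical data: rows are stimulus vectors; `spikes` marks whether a spike followed.
    rng = np.random.default_rng(0)
    stimuli = rng.normal(size=(5000, 10))          # 5000 stimuli, 10-dimensional
    spikes = (stimuli[:, 0] + 0.5 * stimuli[:, 1] ** 2 + rng.normal(size=5000)) > 1.0

    # Fit separate Gaussian mixtures to the spike-triggered and
    # non-spike-triggered stimulus distributions (the generative components).
    gmm_spike = GaussianMixture(n_components=3, covariance_type="full").fit(stimuli[spikes])
    gmm_no_spike = GaussianMixture(n_components=3, covariance_type="full").fit(stimuli[~spikes])

    # Prior probability of a spike, estimated from the data.
    p_spike = spikes.mean()

    def spike_probability(x):
        """P(spike | stimulus) via Bayes' rule on the two mixture densities."""
        log_p_spike = gmm_spike.score_samples(x) + np.log(p_spike)
        log_p_no_spike = gmm_no_spike.score_samples(x) + np.log(1.0 - p_spike)
        # Normalize in log space for numerical stability.
        log_norm = np.logaddexp(log_p_spike, log_p_no_spike)
        return np.exp(log_p_spike - log_norm)

    print(spike_probability(stimuli[:5]))

With a single Gaussian per class and a shared covariance, this posterior reduces to a linear (logistic) model, and with unequal covariances to a quadratic one, which is the sense in which the mixture-based model generalizes those two GLM special cases.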

X Demographics

The data shown below were collected from the profiles of 5 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 119 Mendeley readers of this research output.

Geographical breakdown

Country          Count   As %
Germany              6     5%
United States        3     3%
United Kingdom       1    <1%
France               1    <1%
Unknown            108    91%

Demographic breakdown

Readers by professional status          Count   As %
Student > Ph.D. Student                    39    33%
Researcher                                 24    20%
Student > Master                           12    10%
Professor > Associate Professor             9     8%
Student > Bachelor                          8     7%
Other                                      20    17%
Unknown                                     7     6%

Readers by discipline                   Count   As %
Neuroscience                               32    27%
Agricultural and Biological Sciences       32    27%
Engineering                                13    11%
Computer Science                            9     8%
Psychology                                  7     6%
Other                                      14    12%
Unknown                                    12    10%
Attention Score in Context

This research output has an Altmetric Attention Score of 12. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 27 April 2021.
All research outputs                                      #3,154,348 of 25,972,223 outputs
Outputs from PLoS Computational Biology                   #2,750 of 9,091 outputs
Outputs of similar age                                    #33,721 of 317,972 outputs
Outputs of similar age from PLoS Computational Biology    #38 of 146 outputs
Altmetric has tracked 25,972,223 research outputs across all sources so far. Compared to these, this one has done well and is in the 87th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 9,091 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 20.3. This one has received more attention than average, scoring higher than 69% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 317,972 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 89% of its contemporaries.
We're also able to compare this research output to 146 others from the same source that were published within six weeks on either side of this one. This one has received more attention than average, scoring higher than 73% of its contemporaries.
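
The percentile figures above follow from the quoted ranks and pool sizes; the rounded-down convention in the sketch below is an assumption made for illustration, not Altmetric's documented formula.

    def percentile_rank(rank, total):
        """Percentage of outputs in the pool ranked below this one
        (rank 1 = highest score), rounded down (assumed convention)."""
        return int(100 * (total - rank) / total)

    # Checks against the figures quoted above:
    print(percentile_rank(3_154_348, 25_972_223))  # 87 (all research outputs)
    print(percentile_rank(2_750, 9_091))           # 69 (outputs from PLoS Computational Biology)
    print(percentile_rank(33_721, 317_972))        # 89 (outputs of similar age)
    print(percentile_rank(38, 146))                # 73 (similar age, same source)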