
Doing the Impossible: Why Neural Networks Can Be Trained at All

Overview of attention for article published in Frontiers in Psychology, July 2018

About this Attention Score

  • Good Attention Score compared to outputs of the same age (66th percentile)
  • Above-average Attention Score compared to outputs of the same age and source (55th percentile)

Mentioned by

8 X users

Citations

19 Dimensions

Readers on

82 Mendeley
Title
Doing the Impossible: Why Neural Networks Can Be Trained at All
Published in
Frontiers in Psychology, July 2018
DOI 10.3389/fpsyg.2018.01185
Authors

Nathan O. Hodas, Panos Stinis

Abstract

As deep neural networks grow in size, from thousands to millions to billions of weights, the performance of those networks becomes limited by our ability to accurately train them. A common naive question arises: if we have a system with billions of degrees of freedom, don't we also need billions of samples to train it? Of course, the success of deep learning indicates that reliable models can be learned with reasonable amounts of data. Similar questions arise in protein folding, spin glasses and biological neural networks. With effectively infinite potential folding/spin/wiring configurations, how does the system find the precise arrangement that leads to useful and robust results? Simple sampling of the possible configurations until an optimal one is reached is not a viable option even if one waited for the age of the universe. On the contrary, there appears to be a mechanism in the above phenomena that forces them to achieve configurations that live on a low-dimensional manifold, avoiding the curse of dimensionality. In the current work we use the concept of mutual information between successive layers of a deep neural network to elucidate this mechanism and suggest possible ways of exploiting it to accelerate training. We show that adding structure to the neural network leads to higher mutual information between layers. High mutual information between layers implies that the effective number of free parameters is exponentially smaller than the raw number of tunable weights, providing insight into why neural networks with far more weights than training points can be reliably trained.
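
The abstract's key quantity is the mutual information between successive layers. As a rough illustration of how such a quantity can be estimated in practice, the sketch below applies a simple histogram (binning) estimator to a toy two-layer network; the estimator, the network sizes, and the per-sample scalar summary of each layer are assumptions made here for illustration, not the method used in the paper.

```python
# Illustrative sketch (not the authors' code): estimate mutual information
# between the activations of two successive layers of a toy network using
# a histogram-based (binning) estimator.
import numpy as np

def mutual_information(x, y, bins=30):
    """Histogram estimate of I(X; Y) in bits for two 1-D signals x and y."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                  # joint distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of X
    py = pxy.sum(axis=0, keepdims=True)        # marginal of Y
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Toy two-layer network with random weights; layer 2 is a deterministic
# function of layer 1, so the estimated mutual information should be high.
rng = np.random.default_rng(0)
inputs = rng.normal(size=(5000, 10))
w1 = rng.normal(size=(10, 8))
w2 = rng.normal(size=(8, 4))
layer_1 = np.tanh(inputs @ w1)
layer_2 = np.tanh(layer_1 @ w2)

# Summarize each layer by one scalar per sample (its mean activation) so the
# 2-D histogram estimator applies; this is a simplification for illustration.
mi = mutual_information(layer_1.mean(axis=1), layer_2.mean(axis=1))
print(f"Estimated I(layer 1; layer 2): {mi:.2f} bits")
```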

X Demographics

The data shown below were collected from the profiles of 8 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 82 Mendeley readers of this research output.

Geographical breakdown

Country   Count   As %
Unknown      82   100%

Demographic breakdown

Readers by professional status   Count   As %
Student > Ph. D. Student            19    23%
Researcher                          17    21%
Student > Master                    10    12%
Student > Bachelor                   6     7%
Other                                5     6%
Other                               10    12%
Unknown                             15    18%

Readers by discipline    Count   As %
Computer Science            20    24%
Physics and Astronomy       10    12%
Engineering                  7     9%
Chemistry                    5     6%
Psychology                   4     5%
Other                       14    17%
Unknown                     22    27%
Attention Score in Context

This research output has an Altmetric Attention Score of 5. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 13 June 2022.
All research outputs: #6,109,012 of 22,663,969 outputs
Outputs from Frontiers in Psychology: #8,774 of 29,357 outputs
Outputs of similar age: #106,925 of 325,718 outputs
Outputs of similar age from Frontiers in Psychology: #318 of 722 outputs
Altmetric has tracked 22,663,969 research outputs across all sources so far. This one has received more attention than most of these and is in the 72nd percentile.
So far Altmetric has tracked 29,357 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.5. This one has gotten more attention than average, scoring higher than 69% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 325,718 tracked outputs that were published within six weeks on either side of this one in any source. This one has gotten more attention than average, scoring higher than 66% of its contemporaries.
We're also able to compare this research output to 722 others from the same source and published within six weeks on either side of this one. This one has gotten more attention than average, scoring higher than 55% of its contemporaries.