| Field | Value |
|---|---|
| Title | Multi-Timescale Memory Dynamics Extend Task Repertoire in a Reinforcement Learning Network With Attention-Gated Memory |
| Published in | Frontiers in Computational Neuroscience, July 2018 |
| DOI | 10.3389/fncom.2018.00050 |
| Pubmed ID | |
| Authors | Marco Martinolli, Wulfram Gerstner, Aditya Gilra |
Abstract

The interplay of reinforcement learning and memory is at the core of several recent neural network models, such as the Attention-Gated MEmory Tagging (AuGMEnT) model. While it succeeds at various animal learning tasks, we find that the AuGMEnT network is unable to cope with some hierarchical tasks, where higher-level stimuli have to be maintained over a long time, while lower-level stimuli need to be remembered and forgotten over a shorter timescale. To overcome this limitation, we introduce a hybrid AuGMEnT, with leaky (short-timescale) and non-leaky (long-timescale) memory units, that allows the exchange of low-level information while maintaining high-level information. We test the performance of the hybrid AuGMEnT network on two cognitive reference tasks, sequence prediction and 12AX.
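The distinction the abstract draws between leaky and non-leaky memory units can be illustrated with a minimal sketch. This is not the paper's implementation; the function and parameter names (`update_memory`, `leak`) are hypothetical, and the sketch only shows how a decay factor separates short-timescale from long-timescale maintenance.

```python
# Illustrative sketch of leaky vs. non-leaky memory traces
# (hypothetical names; not the AuGMEnT paper's actual code).

def update_memory(trace, stimulus, leak):
    """One update step: the trace decays by `leak`, then accumulates input.

    leak = 1.0 -> non-leaky (long-timescale) unit: perfect maintenance.
    leak < 1.0 -> leaky (short-timescale) unit: old inputs fade away.
    """
    return leak * trace + stimulus

# A cue stored in a non-leaky unit persists across many steps,
# while the same cue in a leaky unit decays toward zero.
non_leaky = leaky = 0.0
inputs = [1.0] + [0.0] * 9  # cue at t = 0, then nothing
for x in inputs:
    non_leaky = update_memory(non_leaky, x, leak=1.0)
    leaky = update_memory(leaky, x, leak=0.7)

print(non_leaky)  # still 1.0 after 10 steps
print(leaky)      # decayed close to 0
```

In a hierarchical task like 12AX, this separation would let a long-timescale unit hold a higher-level context cue while short-timescale units remember and forget lower-level stimuli.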
X Demographics

Geographical breakdown

| Country | Count | As % |
|---|---|---|
| United States | 3 | 14% |
| Japan | 2 | 9% |
| Switzerland | 1 | 5% |
| United Kingdom | 1 | 5% |
| Norway | 1 | 5% |
| Unknown | 14 | 64% |
Demographic breakdown

| Type | Count | As % |
|---|---|---|
| Members of the public | 19 | 86% |
| Scientists | 3 | 14% |
Mendeley readers

Geographical breakdown

| Country | Count | As % |
|---|---|---|
| Unknown | 37 | 100% |
Demographic breakdown

| Readers by professional status | Count | As % |
|---|---|---|
| Student > Master | 9 | 24% |
| Student > Ph.D. Student | 6 | 16% |
| Student > Doctoral Student | 4 | 11% |
| Student > Bachelor | 4 | 11% |
| Researcher | 3 | 8% |
| Other | 2 | 5% |
| Unknown | 9 | 24% |
| Readers by discipline | Count | As % |
|---|---|---|
| Computer Science | 14 | 38% |
| Neuroscience | 5 | 14% |
| Psychology | 4 | 11% |
| Physics and Astronomy | 2 | 5% |
| Linguistics | 1 | 3% |
| Other | 3 | 8% |
| Unknown | 8 | 22% |