INVESTIGATING THE COMPUTATIONAL PRINCIPLES OF NEURAL DYNAMICS USING DEEP LEARNING TECHNIQUES
Date
2023-08
Publisher
[Bloomington, Ind.] : Indiana University
Abstract
One of the main goals of computational neuroscience is to explain why and how certain dynamics emerge in neural circuits. A promising approach to this question is to train deep neural networks (DNNs) on ecologically relevant tasks and analyze their dynamics to see whether they resemble those of their biological counterparts; if they do, the resemblance serves as supporting evidence that the modeled circuit is solving the same task. This approach allows us to understand why (for what task) and how (under what learning rules) specific dynamics arise.
We applied this approach to test proposed task-dynamics relationships in two of the most commonly studied systems in the brain: the visual system and the memory system. For the visual system (chapters 1 & 2), we first conducted an in-depth review of the two visual streams hypothesis (TVSH) and then tested its dorsal amnesia sub-hypothesis. According to this sub-hypothesis, the ventral visual stream has a longer memory than the dorsal stream, a difference that stems from the tasks each stream performs. To test it, we trained identical networks on either dorsal-stream or ventral-stream tasks. We found that a DNN trained on orientation classification (a dorsal pathway task) develops a shorter memory, while the same DNN trained on object classification (a ventral pathway task) develops a longer memory, corroborating the dorsal amnesia sub-hypothesis.
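The memory-length comparison above can be illustrated with a toy sketch (these are not the dissertation's actual models or its memory measure, and all names here are hypothetical): two linear recurrent networks whose recurrent weights differ only in spectral radius, where a larger radius yields a slower-decaying impulse response and hence a longer memory.

```python
import numpy as np

def memory_timescale(W, steps=200):
    """Crude memory measure for a linear RNN h_t = W h_{t-1}:
    the number of steps until the norm of an impulse response
    decays below 1% of its initial value (capped at `steps`)."""
    h = np.ones(W.shape[0]) / np.sqrt(W.shape[0])  # unit-norm impulse
    h0 = np.linalg.norm(h)
    for t in range(1, steps + 1):
        h = W @ h
        if np.linalg.norm(h) < 0.01 * h0:
            return t
    return steps

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 32)) / np.sqrt(32)
rho = max(abs(np.linalg.eigvals(A)))        # spectral radius of A
short = 0.5 * A / rho                       # fast-forgetting network
long_ = 0.95 * A / rho                      # slow-forgetting network

print(memory_timescale(short), memory_timescale(long_))
```

The same spirit applies to trained nonlinear networks: a task that rewards integrating information over time tends to push recurrent dynamics toward slower decay.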
For the memory system (chapters 3 & 4), we first analyzed the predictive coding theory of the brain and argued that, under this theory, the entorhinal-hippocampal circuit must be tasked with predicting its inputs, making an autoencoder architecture a reasonable first approximation of this circuit. We showed that a sparse autoencoder trained to reconstruct its inputs develops scale-invariant place cells, corroborating the hypothesis that hippocampal dynamics may emerge from input prediction/reconstruction and supporting the validity of the deep learning framework for neuroscience. Additionally, our results showed that it is crucial to investigate the constraints that shape the behavior of DNNs and neural circuits, because these constraints can explain why biology settles on one solution among the many possible for a given task.
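For readers unfamiliar with the architecture, a minimal sparse-autoencoder sketch is shown below. It is an illustration only, not the dissertation's model: toy random data, a single tied-weight ReLU layer, and an L1 penalty on the hidden code as the sparsity constraint, trained by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 16))   # toy data: 100 samples, 16 dims

# Tied-weight autoencoder: code = relu(X @ W), reconstruction = code @ W.T
W = 0.1 * rng.standard_normal((16, 32))
lam, lr = 1e-3, 1e-2                 # sparsity weight, learning rate

def loss_and_grad(W):
    H = np.maximum(X @ W, 0.0)       # sparse hidden code (ReLU)
    R = H @ W.T                      # reconstruction of the input
    err = R - X
    loss = np.mean(err ** 2) + lam * np.mean(np.abs(H))
    dR = 2 * err / err.size          # grad of MSE w.r.t. reconstruction
    dH = dR @ W + lam * np.sign(H) / H.size
    dH *= (H > 0)                    # ReLU gradient mask
    dW = X.T @ dH + dR.T @ H         # W appears in encoder and decoder
    return loss, dW

losses = []
for _ in range(200):
    loss, dW = loss_and_grad(W)
    losses.append(loss)
    W -= lr * dW

print(round(losses[0], 4), round(losses[-1], 4))
```

In the dissertation's setting, the interesting result is not that the reconstruction loss falls, but that under the sparsity constraint the learned hidden units come to resemble place cells.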
In summary, we showed that the deep learning framework for neuroscience can offer valuable insights into understanding the computational principles of neural dynamics when informed by theory and biology.
Description
Thesis (Ph.D.) - Indiana University, Department of Psychological and Brain Sciences and the Program in Neuroscience, 2023
Keywords
Deep learning, hippocampus, predictive coding, sparse autoencoder, two visual streams hypothesis, scale invariant place cells
Rights
This work is under a CC BY-NC-SA license. You are free to copy and redistribute the material in any format, as well as remix, transform, and build upon it, as long as you give appropriate credit to the original creator, provide a link to the license, and indicate any changes made. You may not use this work for commercial purposes, and you must distribute any contributions under an identical license.
https://creativecommons.org/licenses/by-nc-sa/3.0/
Type
Doctoral Dissertation