Theses and Dissertations
Permanent link for this collection: https://hdl.handle.net/2022/3086
Browsing Theses and Dissertations by Author "Alipour, Abolfazl"
INVESTIGATING THE COMPUTATIONAL PRINCIPLES OF NEURAL DYNAMICS USING DEEP LEARNING TECHNIQUES
([Bloomington, Ind.] : Indiana University, 2023-08) Alipour, Abolfazl; Tom James; John Beggs

One of the main goals of computational neuroscience is to explain why and how certain dynamics emerge in neural circuits. A promising approach to this question is to train deep neural networks (DNNs) on ecologically relevant tasks and analyze their dynamics to see whether they resemble the dynamics of their biological counterparts; if they do, this resemblance can be taken as supporting evidence that the modeled circuit is solving the same task. This approach allows us to understand why (for what task) and how (under what learning rules) specific dynamics arise. We used this approach to test proposed task-dynamics relationships in two of the most commonly studied systems in the brain: the visual system and the memory system.

In the visual system (Chapters 1 and 2), we first conducted an in-depth review of the two visual streams hypothesis (TVSH) and then tested its dorsal amnesia sub-hypothesis. According to this sub-hypothesis, the ventral visual stream has a longer memory than the dorsal stream, a difference that stems from the tasks each stream performs. To test this hypothesis, we trained identical networks to perform either dorsal-stream or ventral-stream tasks. We found that a DNN trained on orientation classification (a dorsal pathway task) develops a shorter memory, while the same DNN trained on object classification (a ventral pathway task) develops a longer memory, corroborating the dorsal amnesia sub-hypothesis.

In the memory system (Chapters 3 and 4), we first analyzed the predictive coding theory of the brain and argued that, under this theory, the entorhinal-hippocampal circuit must be tasked with predicting its inputs, for which an autoencoder architecture is a reasonable first approximation. We showed that a sparse autoencoder network tasked with reconstructing its inputs develops scale-invariant place cells, corroborating the hypothesis that hippocampal dynamics may emerge from input prediction/reconstruction and supporting the validity of the deep learning framework for neuroscience. Additionally, our results showed that it is crucial to investigate the constraints that shape the behavior of DNNs and neural circuits, because those constraints can explain why biology has chosen one solution among the many possible for a given task.

In summary, we showed that the deep learning framework for neuroscience, when informed by theory and biology, can offer valuable insights into the computational principles of neural dynamics.
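The memory-length comparison described above could, in principle, be probed by presenting a trained network with a brief stimulus, removing the input, and measuring how long the hidden state retains stimulus-driven activity. The sketch below illustrates one such probe on a recurrent network; the architecture, sizes, and decay-based criterion are illustrative assumptions, not the dissertation's actual protocol.

```python
# Minimal sketch of a memory-timescale probe (assumed PyTorch).
# The network, input dimensions, and criterion are illustrative;
# the dissertation's actual networks and tasks may differ.
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=10, hidden_size=32, batch_first=True)

stimulus = torch.randn(1, 5, 10)   # 5 time steps of input
blank = torch.zeros(1, 50, 10)     # 50 time steps of silence
_, h = rnn(stimulus)               # hidden state right after the stimulus
out, _ = rnn(blank, h)             # free-running dynamics with no input

# One crude memory measure: how quickly hidden activity decays back
# toward baseline once input is removed (slower decay => longer memory).
norms = out.squeeze(0).norm(dim=1)
print(norms)
```

Comparing this decay profile between two identically initialized networks, one trained on an orientation task and one on an object task, would be one way to operationalize "shorter" versus "longer" memory.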
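To make the sparse-autoencoder idea concrete, here is a minimal sketch of an autoencoder trained to reconstruct its input with an L1 penalty on the hidden code to encourage sparse activity. Layer sizes, the penalty weight, and the random stand-in input are illustrative assumptions, not the dissertation's actual model or data.

```python
# Minimal sparse-autoencoder sketch (assumed PyTorch); all sizes
# and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Reconstructs its input; an L1 penalty on the hidden code
    pushes the network toward sparse hidden activity."""
    def __init__(self, n_input=100, n_hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_input, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_input)

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()
sparsity_weight = 1e-3  # assumed value

x = torch.rand(32, 100)  # stand-in for spatial input features
opt.zero_grad()
recon, code = model(x)
loss = mse(recon, x) + sparsity_weight * code.abs().mean()
loss.backward()
opt.step()
```

Under this kind of objective, the hypothesis in the abstract is that hidden units come to resemble place cells once the inputs carry spatial structure; the sparsity constraint is one example of the architectural constraints whose role the dissertation argues must be investigated.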