# Papers : Biological and Artificial Neural Networks
I have collected papers on **Artificial Neural Networks** that relate to **Neuroscience** (especially Computational Neuroscience). If you know of a relevant paper that is not listed, I would appreciate it if you could let me know via an **Issue**.

## Artificial neural networks and computational neuroscience
#### Survey
- D. Cox, T. Dean. "Neural networks and neuroscience-inspired computer vision". *Curr. Biol.* **24**(18) 921-929 (2014). ([sciencedirect](https://www.sciencedirect.com/science/article/pii/S0960982214010392?via%3Dihub))
- A. Marblestone, G. Wayne, K. Kording. "Toward an integration of deep learning and neuroscience". (2016). ([arXiv](https://arXiv.org/abs/1606.03813))
- O. Barak. "Recurrent neural networks as versatile tools of neuroscience research". *Curr. Opin. Neurobiol.* (2017). ([sciencedirect](https://www.sciencedirect.com/science/article/pii/S0959438817300429?via%3Dihub))
- D. Silva, P. Cruz, A. Gutierrez. "Are the long-short term memory and convolution neural net biological system?". *KICS.* **4**(2), 100-106 (2018). ([sciencedirect](https://www.sciencedirect.com/science/article/pii/S2405959518300249))
- N. Kriegeskorte, P. Douglas. "Cognitive computational neuroscience". *Nat. Neurosci.* **21**(9), 1148-1160 (2018). ([arXiv](https://arXiv.org/abs/1807.11819))
- N. Kriegeskorte, T. Golan. "Neural network models and deep learning - a primer for biologists". (2019). ([arXiv](https://arxiv.org/abs/1902.04704))
- K.R. Storrs, N. Kriegeskorte. "Deep Learning for Cognitive Neuroscience". (2019). ([arXiv](https://arxiv.org/abs/1903.01458))
- T.C. Kietzmann, P. McClure, N. Kriegeskorte. "Deep Neural Networks in Computational Neuroscience". *Oxford Research Encyclopaedia of Neuroscience*. (2019). ([Oxford](https://oxfordre.com/neuroscience/view/10.1093/acrefore/9780190264086.001.0001/acrefore-9780190264086-e-46), [bioRxiv](https://www.biorxiv.org/content/10.1101/133504v2))
- J.S. Bowers. "Parallel Distributed Processing Theory in the Age of Deep Networks". *Trends. Cogn. Sci.* (2019). ([sciencedirect](https://www.sciencedirect.com/science/article/pii/S1364661317302164?via%3Dihub))
- R.M. Cichy, D. Kaiser. "Deep Neural Networks as Scientific Models". *Trends. Cogn. Sci.* (2019). ([sciencedirect](https://www.sciencedirect.com/science/article/pii/S1364661319300348?via%3Dihub))
- S. Musall, A.E. Urai, D. Sussillo, A.K. Churchland. "Harnessing behavioral diversity to understand neural computations for cognition". *Curr. Opin. Neurobiol.* (2019). ([sciencedirect](https://www.sciencedirect.com/science/article/pii/S0959438819300285))
- B.A. Richards, T.P. Lillicrap, et al. "A deep learning framework for neuroscience". *Nat. Neurosci.* (2019). ([Nat. Neurosci.](https://www.nature.com/articles/s41593-019-0520-2))
- U. Hasson, S.A. Nastase, A. Goldstein. "Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks". *Neuron*. (2020). ([Neuron](https://www.cell.com/neuron/fulltext/S0896-62731931044-X))
- A. Saxe, S. Nelli, C. Summerfield. "If deep learning is the answer, then what is the question?". (2020). ([arXiv](https://arxiv.org/abs/2004.07580))

#### Issue
- T.P. Lillicrap, K.P. Kording. "What does it mean to understand a neural network?". (2019). ([arXiv](https://arxiv.org/abs/1907.06374))

### Analysis methods for neural networks
Methods for understanding the learned representations of ANNs.
#### Survey
- D. Barrett, A. Morcos, J. Macke. "Analyzing biological and artificial neural networks: challenges with opportunities for synergy?". (2018). ([arXiv](https://arXiv.org/abs/1810.13373))

#### Neuron Feature
- I. Rafegas, M. Vanrell, L.A. Alexandre. "Understanding trained CNNs by indexing neuron selectivity". (2017). ([arXiv](https://arxiv.org/abs/1702.00382))
- A. Nguyen, J. Yosinski, J. Clune. "Understanding Neural Networks via Feature Visualization: A survey". (2019). ([arXiv](https://arxiv.org/abs/1904.08939))

#### Comparing the representations of neural networks with those of the brain

##### Representational similarity analysis (RSA)
- N. Kriegeskorte, J. Diedrichsen. "Peeling the Onion of Brain Representations". *Annu. Rev. Neurosci*. (2019). ([Annu Rev Neurosci](https://www.annualreviews.org/doi/10.1146/annurev-neuro-080317-061906))
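
RSA compares two systems at second order: for each system, build a representational dissimilarity matrix (RDM) over stimulus conditions, then correlate the two RDMs. A minimal NumPy sketch (correlation-distance RDMs compared with Spearman's rank correlation; array shapes and names are illustrative, not from the papers):

```python
import numpy as np

def rdm(acts):
    """Condensed RDM: 1 - Pearson r between every pair of condition
    rows of a (conditions x units) activity matrix."""
    z = (acts - acts.mean(1, keepdims=True)) / acts.std(1, keepdims=True)
    corr = z @ z.T / acts.shape[1]          # condition-by-condition correlations
    iu = np.triu_indices(len(acts), k=1)    # upper triangle, no diagonal
    return 1.0 - corr[iu]

def rsa_score(a, b):
    """Second-order similarity: Spearman correlation of the two RDMs
    (Pearson correlation of their ranks; assumes no ties)."""
    ra = rdm(a).argsort().argsort().astype(float)
    rb = rdm(b).argsort().argsort().astype(float)
    ra, rb = ra - ra.mean(), rb - rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))
```

Because the comparison is between dissimilarity structures rather than raw activations, the two systems may have different numbers of units.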

##### Canonical correlation analysis (CCA)
- M. Raghu, J. Gilmer, J. Yosinski, J. Sohl-Dickstein. "SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability". *NIPS.* (2017). ([arXiv](https://arXiv.org/abs/1706.05806))
- H. Wang, et al. "Finding the needle in high-dimensional haystack: A tutorial on canonical correlation analysis". (2018). ([arXiv](https://arxiv.org/abs/1812.02598))
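
CCA finds pairs of linear projections of two representations that are maximally correlated; SVCCA additionally denoises each representation with an SVD before the CCA step. A hedged sketch of the core computation, using the standard result that the canonical correlations are the singular values of Qx^T Qy after QR orthonormalization (names are illustrative):

```python
import numpy as np

def mean_cca_similarity(X, Y):
    """Mean canonical correlation between two (samples x features)
    representations. SVCCA would first truncate each matrix with an SVD;
    this sketch applies plain CCA to the full feature sets."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Qx, _ = np.linalg.qr(X)   # orthonormal basis for the column space of X
    Qy, _ = np.linalg.qr(Y)
    # Singular values of Qx^T Qy are the canonical correlations.
    rho = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(rho.mean())
```

Any invertible linear transform of one representation leaves the score unchanged, which is what makes CCA useful for comparing layers with different bases.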

##### Centered kernel alignment (CKA)
- S. Kornblith, M. Norouzi, H. Lee, G. Hinton. "Similarity of Neural Network Representations Revisited". (2019). ([arXiv](https://arxiv.org/abs/1905.00414v1))
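
Linear CKA has a simple closed form: with column-centered activation matrices X and Y (samples x features), CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F ||Y^T Y||_F). A small NumPy sketch of that formula:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two (samples x features) representation
    matrices; invariant to orthogonal transforms and isotropic scaling."""
    X = X - X.mean(axis=0, keepdims=True)   # center each feature
    Y = Y - Y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)
```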

##### Representational stability analysis (ReStA)
- S. Abnar, L. Beinborn, R. Choenni, W. Zuidema. "Blackbox meets blackbox: Representational Similarity and Stability Analysis of Neural Language Models and Brains". (2019). ([arXiv](https://arxiv.org/abs/1906.01539))

#### Fixed point analysis for RNN
- M.B. Ottaway, P.Y. Simard, D.H. Ballard. "Fixed point analysis for recurrent networks". *NIPS.* (1989). ([pdf](https://papers.nips.cc/paper/181-fixed-point-analysis-for-recurrent-networks.pdf))
- D. Sussillo, O. Barak. "Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks". *Neural Comput.* **25**(3), 626-649 (2013). ([MIT Press](https://www.mitpressjournals.org/doi/full/10.1162/NECO_a_00409?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%3dpubmed), [Jupyter notebook](https://github.com/google-research/computation-thru-dynamics/blob/master/notebooks/Fixed%20Point%20Finder%20Tutorial.ipynb))
- M.D. Golub, D. Sussillo. "FixedPointFinder: A Tensorflow toolbox for identifying and characterizing fixed points in recurrent neural networks". *JOSS.* (2018). ([pdf](https://web.stanford.edu/~mgolub/publications/GolubJOSS2018.pdf), [GitHub](https://github.com/mattgolub/fixed-point-finder))
- G.E. Katz, J.A. Reggia. "Using Directional Fibers to Locate Fixed Points of Recurrent Neural Networks". *IEEE.* (2018). ([IEEE](https://ieeexplore.ieee.org/document/8016349))
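
The Sussillo & Barak approach treats fixed-point finding as optimization: minimize the "kinetic energy" q(h) = 1/2 ||F(h) - h||^2 of the RNN update F from many initial states. A toy sketch for a hypothetical vanilla RNN F(h) = tanh(Wh) with zero input (SciPy's BFGS stands in for the optimizers used in the papers):

```python
import numpy as np
from scipy.optimize import minimize

def find_fixed_point(W, h0):
    """Minimize q(h) = 1/2 ||tanh(W h) - h||^2 starting from h0.
    Returns the candidate fixed point and the final value of q
    (q near zero indicates a true fixed or slow point)."""
    def q(h):
        return 0.5 * np.sum((np.tanh(W @ h) - h) ** 2)
    res = minimize(q, h0, method="BFGS")
    return res.x, res.fun
```

In practice one runs this from many states sampled along trajectories and then linearizes the dynamics around each q-minimum to classify it.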

#### Ablation analysis
- A.S. Morcos, D.G.T. Barrett, N.C. Rabinowitz, M. Botvinick. "On the importance of single directions for generalization". *ICLR.* (2018). ([arXiv](https://arxiv.org/abs/1803.06959))

### Computational psychiatry
I haven't been able to completely survey papers in this field.
- R.E. Hoffman, U. Grasemann, R. Gueorguieva, D. Quinlan, D. Lane, R. Miikkulainen. "Using computational patients to evaluate illness mechanisms in schizophrenia". *Biol. Psychiatry.* **69**(10), 997–1005 (2011). ([PMC](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3105006/))

## Deep neural networks as models of the brain
Understanding the brain's neural representations directly is difficult. Neural networks trained on specific tasks (i.e., optimized for specific loss functions) sometimes acquire representations similar to those found in the brain. When they do, we can indirectly infer the purpose of the corresponding neural representations in the brain.

### Survey
- A.J.E. Kell, J.H. McDermott. "Deep neural network models of sensory systems: windows onto the role of task constraints". *Curr. Opin. Neurobiol.* (2019). ([sciencedirect](https://www.sciencedirect.com/science/article/pii/S0959438818302034))

### Cortical neuron
- P. Poirazi, T. Brannon, B.W. Mel. "Pyramidal Neuron as Two-Layer Neural Network". *Neuron*. **37**(6). (2003). ([Neuron](https://www.cell.com/neuron/fulltext/S0896-62730300149-1))
- B. David, S. Idan, L. Michael. "Single Cortical Neurons as Deep Artificial Neural Networks". (2019). ([bioRxiv](https://www.biorxiv.org/content/10.1101/613141v1))

### Vision
- D. Zipser, R.A. Andersen. "A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons". *Nature.* **331**, 679–684 (1988). ([Nature.](https://www.nature.com/articles/331679a0))
- A. Krizhevsky, I. Sutskever, G. Hinton. "ImageNet classification with deep convolutional neural networks". *NIPS* (2012). ([pdf](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf))
- (cf.) I. Goodfellow, Y. Bengio, A. Courville. "[Deep Learning](https://www.deeplearningbook.org/)". MIT Press. (2016) : Chapter 9.10 "The Neuroscientific Basis for Convolutional Networks"
- D. Yamins, et al. "Performance-optimized hierarchical models predict neural responses in higher visual cortex". *PNAS.* **111**(23) 8619-8624 (2014). ([PNAS](https://www.pnas.org/content/111/23/8619))
- S. Khaligh-Razavi, N. Kriegeskorte. "Deep supervised, but not unsupervised, models may explain IT cortical representation". *PLoS Comput. Biol*. **10**(11), (2014). ([PLOS](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003915))
- U. Güçlü, M.A.J. van Gerven. "Deep Neural Networks Reveal a Gradient in the Complexity of Neural Representations across the Ventral Stream". *J. Neurosci.* **35**(27), (2015). ([J. Neurosci.](http://www.jneurosci.org/content/35/27/10005))
- D. Yamins, J. DiCarlo. "Eight open questions in the computational modeling of higher sensory cortex". *Curr. Opin. Neurobiol.* **37**, 114–120 (2016). ([sciencedirect](https://www.sciencedirect.com/science/article/abs/pii/S0959438816300022))
- K.M. Jozwik, N. Kriegeskorte, K.R. Storrs, M. Mur. "Deep Convolutional Neural Networks Outperform Feature-Based But Not Categorical Models in Explaining Object Similarity Judgments". *Front. Psychol*. (2017). ([Front. Psychol](https://www.frontiersin.org/articles/10.3389/fpsyg.2017.01726/full))
- M.N.U. Laskar, L.G.S. Giraldo, O. Schwartz. "Correspondence of Deep Neural Networks and the Brain for Visual Textures". (2018). ([arXiv](https://arxiv.org/abs/1806.02888))
- I. Kuzovkin, et al. "Activations of Deep Convolutional Neural Network are Aligned with Gamma Band Activity of Human Visual Cortex". *Commun. Biol.* **1** (2018). ([Commun. Biol.](https://www.nature.com/articles/s42003-018-0110-y))
- M. Schrimpf, et al. "Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?". (2018). ([bioRxiv](https://www.biorxiv.org/content/early/2018/09/05/407007))
- E. Kim, D. Hannan, G. Kenyon. "Deep Sparse Coding for Invariant Multimodal Halle Berry Neurons". *CVPR.* (2018). ([arXiv](https://arXiv.org/abs/1711.07998))
- S. Ocko, J. Lindsey, S. Ganguli, S. Deny. "The emergence of multiple retinal cell types through efficient coding of natural movies". (2018). ([bioRxiv](https://www.biorxiv.org/content/early/2018/10/31/458737))
- Q. Yan, et al. "Revealing Fine Structures of the Retinal Receptive Field by Deep Learning Networks". (2018). ([arXiv](https://arXiv.org/abs/1811.02290))
- H. Wen, J. Shi, W. Chen, Z. Liu. "Deep Residual Network Predicts Cortical Representation and Organization of Visual Features for Rapid Categorization". *Sci.Rep.* (2018). ([Sci.Rep.](https://www.nature.com/articles/s41598-018-22160-9))
- J. Lindsey, S. Ocko, S. Ganguli, S. Deny. "A Unified Theory of Early Visual Representations from Retina to Cortex through Anatomically Constrained Deep CNNs". (2019). ([arXiv](https://arXiv.org/abs/1901.00945))
- I. Fruend. "Simple, biologically informed models, but not convolutional neural networks describe target detection in naturalistic images". *bioRxiv* (2019). ([bioRxiv](https://www.biorxiv.org/content/10.1101/578633v1))
- A. Doerig, et al. "Capsule Networks but not Classic CNNs Explain Global Visual Processing". (2019). ([bioRxiv](https://www.biorxiv.org/content/10.1101/747394v1))
- A.S. Benjamin, et al. "Hue tuning curves in V4 change with visual context". (2019). ([bioRxiv](https://www.biorxiv.org/content/10.1101/780478v1))
- S. Baek, M. Song, J. Jang, et al. "Spontaneous generation of face recognition in untrained deep neural networks". (2019). ([bioRxiv](https://www.biorxiv.org/content/10.1101/857466v1))

#### Recurrent networks for object recognition
- C. J. Spoerer, P. McClure, N. Kriegeskorte. "Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition". *Front. Psychol.* (2017). ([Front. Psychol](https://www.frontiersin.org/articles/10.3389/fpsyg.2017.01551/full))
- A. Nayebi, D. Bear, J. Kubilius, K. Kar, S. Ganguli, D. Sussillo, J. DiCarlo, D. Yamins. "Task-Driven Convolutional Recurrent Models of the Visual System". (2018). ([arXiv](https://arXiv.org/abs/1807.00053), [GitHub](https://github.com/neuroailab/tnn))
- T.C. Kietzmann, et al. "Recurrence required to capture the dynamic computations of the human ventral visual stream". (2019). ([arXiv](https://arxiv.org/abs/1903.05946))
- K. Qiao. et al. "Category decoding of visual stimuli from human brain activity using a bidirectional recurrent neural network to simulate bidirectional information flows in human visual cortices". (2019). ([arXiv](https://arxiv.org/abs/1903.07783))
- K. Kar, J. Kubilius, K. Schmidt, E.B. Issa, J.J. DiCarlo. "Evidence that recurrent circuits are critical to the ventral stream’s execution of core object recognition behavior". *Nat. Neurosci.* (2019). ([Nat. Neurosci.](https://www.nature.com/articles/s41593-019-0392-5), [bioRxiv](https://www.biorxiv.org/content/10.1101/354753v1))
- T.C. Kietzmann, C.J. Spoerer, L.K.A. Sörensen, R.M. Cichy, O. Hauk, N. Kriegeskorte. "Recurrence is required to capture the representational dynamics of the human visual system". *PNAS.* (2019). ([PNAS](https://www.pnas.org/content/early/2019/10/04/1905544116))

#### Primary visual cortex (V1)
- S.A. Cadena, et al. "Deep convolutional models improve predictions of macaque V1 responses to natural images". *PLOS Comput. Biol.* (2019). ([PLOS](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006897), [bioRxiv](https://www.biorxiv.org/content/10.1101/201764v2))
- A.S. Ecker, et al. "A rotation-equivariant convolutional neural network model of primary visual cortex". *ICLR* (2019). ([OpenReview](https://openreview.net/forum?id=H1fU8iAqKX), [arXiv](https://arxiv.org/abs/1809.10504))

#### Visual illusion
Also see the papers associated with [PredNet](#prednet-deep-predictive-coding-network).
- E.J. Ward. "Exploring Perceptual Illusions in Deep Neural Networks". (2019). ([bioRxiv](https://www.biorxiv.org/content/10.1101/687905v1))
- E.D. Sun, R. Dekel. "ImageNet-trained deep neural network exhibits illusion-like response to the Scintillating Grid". (2019). ([arXiv](https://arxiv.org/abs/1907.09019))

#### Recursive Cortical Network (RCN; non-NN model)
- D. George, et al. "A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs". *Science* (2017). ([Science](http://science.sciencemag.org/content/358/6368/eaag2612.full), [GitHub](https://github.com/vicariousinc/science_rcn))

#### Weight shared ResNet as RNN for object recognition
- Q. Liao, T. Poggio. "Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex". (2016). ([arXiv](https://arXiv.org/abs/1604.03640))

#### Generating visual super stimuli
- J. Ukita, T. Yoshida, K. Ohki. "Characterisation of nonlinear receptive fields of visual neurons by convolutional neural network". *Sci.Rep.* (2019). ([Sci.Rep.](https://www.nature.com/articles/s41598-019-40535-4))
- C.R. Ponce, et al. "Evolving super stimuli for real neurons using deep generative networks". *Cell*. **177**, 999–1009 (2019). ([bioRxiv](https://www.biorxiv.org/content/10.1101/516484v1), [Cell](https://www.cell.com/cell/fulltext/S0092-86741930391-5))
- P. Bashivan, K. Kar, J.J DiCarlo. "Neural Population Control via Deep Image Synthesis". *Science.* (2019). ([bioRxiv](https://www.biorxiv.org/content/10.1101/461525v1), [Science](https://science.sciencemag.org/content/364/6439/eaav9436), [GitHub1](https://github.com/dicarlolab/npc), [GitHub2](https://github.com/dicarlolab/retinawarp))
- A.P. Batista, K.P. Kording. "A Deep Dive to Illuminate V4 Neurons". *Trends. Cogn. Sci.* (2019). ([Trends. Cogn. Sci.](https://www.cell.com/trends/neurosciences/fulltext/S0166-22361930111-0))

#### Visual number sense
- K. Nasr, P. Viswanathan, A. Nieder. "Number detectors spontaneously emerge in a deep neural network designed for visual object recognition". *Sci. Adv.* (2019). ([Sci. Adv.](https://advances.sciencemag.org/content/5/5/eaav7903))

### Auditory cortex
- U. Güçlü, J. Thielen, M. Hanke, M. van Gerven. "Brains on Beats". *NIPS* (2016) ([arXiv](https://arxiv.org/abs/1606.02627))
- A.J.E. Kell, D.L.K. Yamins, E.N. Shook, S.V. Norman-Haignere, J.H. McDermott. "A Task-Optimized Neural Network Replicates Human Auditory Behavior, Predicts Brain Responses, and Reveals a Cortical Processing Hierarchy". *Neuron* **98**(3), (2018) ([sciencedirect](https://www.sciencedirect.com/science/article/pii/S0896627318302502?via%3Dihub))
- T. Koumura, H. Terashima, S. Furukawa. "Cascaded Tuning to Amplitude Modulation for Natural Sound Recognition". *J. Neurosci.* **39**(28), 5517-5533 (2019). ([J. Neurosci.](https://www.jneurosci.org/content/39/28/5517), [bioRxiv](https://www.biorxiv.org/content/10.1101/308999v2), [GitHub](https://github.com/cycentum/cascaded-am-tuning-for-sound-recognition))

### Motor cortex
- D. Sussillo, M. Churchland, M. Kaufman, K. Shenoy. "A neural network that finds a naturalistic solution for the production of muscle activity". *Nat. Neurosci.* **18**(7), 1025–1033 (2015). ([PubMed](https://www.ncbi.nlm.nih.gov/pubmed/26075643))
- J.A. Michaels, et al. "A neural network model of flexible grasp movement generation". (2019). ([bioRxiv](https://www.biorxiv.org/content/10.1101/742189v1))
- J. Merel, M. Botvinick, G. Wayne. "Hierarchical motor control in mammals and machines". *Nat. Commun.* (2019). ([Nat.Commun.](https://www.nature.com/articles/s41467-019-13239-6))

### Spatial coding (Place cells, Grid cells, Head direction cells)
- C. Cueva, X. Wei. "Emergence of grid-like representations by training recurrent neural networks to perform spatial localization". *ICLR.* (2018). ([arXiv](https://arXiv.org/abs/1803.07770))
- A. Banino, et al. "Vector-based navigation using grid-like representations in artificial agents". *Nature.* **557**(7705), 429–433 (2018). ([pdf](https://deepmind.com/documents/201/Vector-based%20Navigation%20using%20Grid-like%20Representations%20in%20Artificial%20Agents.pdf), [GitHub](https://github.com/deepmind/grid-cells))
- J.C.R. Whittington. et al. "Generalisation of structural knowledge in the hippocampal-entorhinal system". *NIPS.* (2018). ([arXiv](https://arxiv.org/abs/1805.09042))
- C.J. Cueva, P.Y. Wang, M. Chin, X. Wei. "Emergence of functional and structural properties of the head direction system by optimization of recurrent neural networks". (2020). ([arXiv](https://arxiv.org/abs/1912.10189))

### Rodent barrel cortex
- C. Zhuang, J. Kubilius, M. Hartmann, D. Yamins. "Toward Goal-Driven Neural Network Models for the Rodent Whisker-Trigeminal System". *NIPS.* (2017). ([arXiv](https://arxiv.org/abs/1706.07555))

### Convergent Temperature Representations
- M. Haesemeyer, A. Schier, F. Engert. "Convergent temperature representations in artificial and biological neural networks". *Neuron*. (2019). ([bioRxiv](https://www.biorxiv.org/content/early/2018/08/29/390435)), ([Neuron](https://www.cell.com/neuron/fulltext/S0896-62731930601-4))

### Cognitive task
- H.F. Song, G.R. Yang, X.J. Wang. "Reward-based training of recurrent neural networks for cognitive and value-based tasks". *eLife*. **6** (2017). ([eLife](https://elifesciences.org/articles/21492))
- G.R. Yang, M.R. Joglekar, H.F. Song, W.T. Newsome, X.J. Wang. "Task representations in neural networks trained to perform many cognitive tasks". *Nat. Neurosci.* (2019). ([Nat. Neurosci.](https://www.nature.com/articles/s41593-018-0310-2)) ([GitHub](https://github.com/gyyang/multitask))

### Time perception
- N.F. Hardy, V. Goudar, J.L. Romero-Sosa, D.V. Buonomano. "A model of temporal scaling correctly predicts that motor timing improves with speed". *Nat. Commun.* **9** (2018). ([Nat. Commun.](https://www.nature.com/articles/s41467-018-07161-6))
- J. Wang, D. Narain, E.A. Hosseini, M. Jazayeri. "Flexible timing by temporal scaling of cortical responses". *Nat. Neurosci.* **21** 102–110(2018). ([Nat. Neurosci.](https://www.nature.com/articles/s41593-017-0028-6))
- W. Roseboom, Z. Fountas, K. Nikiforou, D. Bhowmik, M. Shanahan, A. K. Seth. "Activity in perceptual classification networks as a basis for human subjective time perception". *Nat. Commun.* **10** (2019). ([Nat. Commun.](https://www.nature.com/articles/s41467-018-08194-7))
- B. Deverett, et al. "Interval timing in deep reinforcement learning agents". *NeurIPS 2019*. (2019). ([arXiv](https://arxiv.org/abs/1905.13469v2))
- Z. Bi, C. Zhou. "Time representation in neural network models trained to perform interval timing tasks". (2019). ([arXiv](https://arxiv.org/abs/1910.05546)).

### Short-term memory task
- K. Rajan, C.D.Harvey, D.W.Tank. "Recurrent Network Models of Sequence Generation and Memory". *Neuron.* **90**(1), 128-142 (2016). ([sciencedirect](https://www.sciencedirect.com/science/article/pii/S0896627316001021?via%3Dihub))
- A.E. Orhan, W.J. Ma. "A diverse range of factors affect the nature of neural representations underlying short-term memory". *Nat. Neurosci.* (2019). ([Nat. Neurosci.](https://www.nature.com/articles/s41593-018-0314-y)), ([bioRxiv](https://www.biorxiv.org/content/10.1101/244707v3)), ([GitHub](https://github.com/eminorhan/recurrent-memory))
- N.Y. Masse. et al. "Circuit mechanisms for the maintenance and manipulation of information in working memory". *Nat. Neurosci.* (2019). ([Nat. Neurosci.](https://www.nature.com/articles/s41593-019-0414-3)), ([bioRxiv](https://www.biorxiv.org/content/10.1101/305714v2))

### Language
- J. Chiang, et al. "Neural and computational mechanisms of analogical reasoning". (2019). ([bioRxiv](https://www.biorxiv.org/content/10.1101/596726v1))
- S. Na, Y.J. Choe, D. Lee, G. Kim. "Discovery of Natural Language Concepts in Individual Units of CNNs". *ICLR.* (2019). ([OpenReview](https://openreview.net/forum?id=S1EERs09YQ)), ([arXiv](https://arxiv.org/abs/1902.07249))

#### Language learning
- B.M. Lake, T. Linzen, M. Baroni. "Human few-shot learning of compositional instructions". (2019). ([arXiv](https://arxiv.org/abs/1901.04587))
- A. Alamia, V. Gauducheau, D. Paisios, R. VanRullen. "Which Neural Network Architecture matches Human Behavior in Artificial Grammar Learning?". (2019). ([arXiv](https://arxiv.org/abs/1902.04861))

## Neural network architecture based on neuroscience
### Survey
- D. Hassabis, D. Kumaran, C. Summerfield, M. Botvinick. "Neuroscience-Inspired Artificial Intelligence". *Neuron.* **95**(2), 245-258 (2017). ([sciencedirect](https://www.sciencedirect.com/science/article/pii/S0896627317305093))

### PredNet (Deep predictive coding network)
- W. Lotter, G. Kreiman, D. Cox. "Deep predictive coding networks for video prediction and unsupervised learning". *ICLR.* (2017). ([arXiv](https://arXiv.org/abs/1605.08104), [GitHub](https://coxlab.github.io/prednet/))
- E. Watanabe, A. Kitaoka, K. Sakamoto, M. Yasugi, K. Tanaka. "Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction". *Front. Psychol.* (2018). ([Front. Psychol.](https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00345/full))
- M. Fonseca. "Unsupervised predictive coding models may explain visual brain representation". (2019). ([arXiv](https://arxiv.org/abs/1907.00441), [GitHub](https://github.com/thefonseca/algonauts))
- W. Lotter, G. Kreiman, D. Cox. "A neural network trained to predict future video frames mimics critical properties of biological neuronal responses and perception". *Nat. Machine Intelligence*. (2020). ([arXiv](https://arXiv.org/abs/1805.10734), [Nat. Machine Intelligence](https://www.nature.com/articles/s42256-020-0170-9))

### subLSTM
- R. Costa, Y. Assael, B. Shillingford, N. Freitas, T. Vogels. "Cortical microcircuits as gated-recurrent neural networks". *NIPS.* (2017). ([arXiv](https://arXiv.org/abs/1711.02448))

### Activation functions
- G.S. Bhumbra. "Deep learning improved by biological activation functions". (2018). ([arXiv](https://arxiv.org/abs/1804.11237))

### Normalization
- L. Gonzalo, S. Giraldo, O. Schwartz. "Integrating Flexible Normalization into Mid-Level Representations of Deep Convolutional Neural Networks". (2018). ([arXiv](https://arxiv.org/abs/1806.01823))
- M.F. Günthner, et al. "Learning Divisive Normalization in Primary Visual Cortex". (2019). ([bioRxiv](https://www.biorxiv.org/content/10.1101/767285v1))

## Reinforcement Learning
I haven't been able to completely survey papers in this field.
- N. Haber, D. Mrowca, L. Fei-Fei, D. Yamins. "Learning to Play with Intrinsically-Motivated Self-Aware Agents". *NIPS.* (2018). ([arXiv](https://arxiv.org/abs/1802.07442))
- J. X. Wang, et al. "Prefrontal cortex as a meta-reinforcement learning system". *Nat. Neurosci.* (2018). ([Nat. Neurosci.](https://www.nature.com/articles/s41593-018-0147-8)), ([bioRxiv](https://www.biorxiv.org/content/10.1101/295964v2)), ([blog](https://deepmind.com/blog/prefrontal-cortex-meta-reinforcement-learning-system/))
- M. Botvinick. et al. "Reinforcement Learning, Fast and Slow". *Trends. Cogn. Sci.* (2019). ([Trends. Cogn. Sci.](https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-66131930061-0))
- E.O. Neftci, B.B. Averbeck. "Reinforcement learning in artificial and biological systems". *Nat. Mach. Intell.* (2019). ([Nat. Mach. Intell.](https://www.nature.com/articles/s42256-019-0025-4))
- W. Dabney, Z. Kurth-Nelson, N. Uchida, C.K. Starkweather, D. Hassabis, R. Munos, & M. Botvinick. "A distributional code for value in dopamine-based reinforcement learning". *Nature*. (2020). ([Nature](https://www.nature.com/articles/s41586-019-1924-6)). ([blog](https://deepmind.com/blog/article/Dopamine-and-temporal-difference-learning-A-fruitful-relationship-between-neuroscience-and-AI))

## Learning and development

### Biologically plausible learning algorithms
#### Survey
- J. Whittington, R. Bogacz. "Theories of Error Back-Propagation in the Brain". *Trends. Cogn. Sci.* (2019). ([sciencedirect](https://www.sciencedirect.com/science/article/pii/S1364661319300129?via%3Dihub))
- T.P. Lillicrap, A.Santoro. "Backpropagation through time and the brain". *Curr. Opin. Neurobiol.* (2019). ([sciencedirect](https://www.sciencedirect.com/science/article/pii/S0959438818302009))
- T.P. Lillicrap, A. Santoro, L. Marris, et al. "Backpropagation and the brain". *Nat. Rev. Neurosci*. (2020). ([Nat. Rev. Neurosci.](https://www.nature.com/articles/s41583-020-0277-3))

#### Equilibrium Propagation
- Y. Bengio, D. Lee, J. Bornschein, T. Mesnard, Z. Lin. "Towards Biologically Plausible Deep Learning". (2015). ([arXiv](https://arXiv.org/abs/1502.04156))
- B. Scellier, Y. Bengio. "Equilibrium Propagation: Bridging the Gap Between Energy-Based Models and Backpropagation". *Front. Comput. Neurosci.* **11**(24), (2017). ([arXiv](https://arXiv.org/abs/1602.05179))
- J. Sacramento, R. P. Costa, Y. Bengio, W. Senn. "Dendritic cortical microcircuits approximate the backpropagation algorithm". *NIPS.* (2018). ([arXiv](https://arXiv.org/abs/1810.11393))

#### Feedback alignment
- T. Lillicrap, D. Cownden, D. Tweed, C. Akerman. "Random synaptic feedback weights support error backpropagation for deep learning". *Nat. Commun.* **7** (2016). ([Nat. Commun.](https://www.nature.com/articles/ncomms13276))
- A. Nøkland. "Direct Feedback Alignment Provides Learning in Deep Neural Networks". (2016). ([arXiv](https://arxiv.org/abs/1609.01596))
- M. Akrout, C. Wilson, P.C. Humphreys, T.Lillicrap, D. Tweed. "Deep Learning without Weight Transport". (2019). ([arXiv](https://arxiv.org/abs/1904.05391))
- B.J. Lansdell, P. Prakash, K.P. Kording. "Learning to solve the credit assignment problem". (2019). ([arXiv](https://arxiv.org/abs/1906.00889))
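
The core idea of feedback alignment is that the backward pass need not use the transpose of the forward weights: a fixed random feedback matrix B suffices, because the forward weights come to align with it during training. A toy regression sketch under that scheme (hyperparameters and names are illustrative, not from the papers):

```python
import numpy as np

def train_feedback_alignment(X, Y, hidden=20, lr=0.1, steps=500, seed=0):
    """Two-layer net trained with feedback alignment: the error is
    propagated to the hidden layer through a fixed random matrix B
    instead of W2.T (Lillicrap et al., 2016). Returns the final MSE."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, hidden))
    W2 = rng.normal(0, 0.5, (hidden, n_out))
    B = rng.normal(0, 0.5, (n_out, hidden))   # fixed random feedback
    for _ in range(steps):
        h = np.tanh(X @ W1)
        e = h @ W2 - Y                        # output error
        W2 -= lr * h.T @ e / len(X)           # exact gradient for W2
        delta_h = (e @ B) * (1 - h ** 2)      # random feedback, not e @ W2.T
        W1 -= lr * X.T @ delta_h / len(X)
    return float(np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2))
```

Calling with `steps=0` returns the loss at initialization, which makes it easy to check that the random-feedback updates actually reduce the error.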

#### Local error signal
- H. Mostafa, V. Ramesh, G.Cauwenberghs. "Deep Supervised Learning Using Local Errors". *Front. Neurosci.* (2018). ([Front. Neurosci.](https://www.frontiersin.org/articles/10.3389/fnins.2018.00608/full)).
- A. Nøkland, L.H. Eidnes. "Training Neural Networks with Local Error Signals". (2019). ([arXiv](https://arXiv.org/abs/1901.06656)) ([GitHub](https://github.com/anokland/local-loss))

#### Others
- M. Jaderberg, et al. "Decoupled Neural Interfaces using Synthetic Gradients" (2016). ([arXiv](https://arxiv.org/abs/1608.05343))
- N. Ke, A. Goyal, O. Bilaniuk, J. Binas, M. Mozer, C. Pal, Y. Bengio. "Sparse Attentive Backtracking: Temporal CreditAssignment Through Reminding". *NIPS.* (2018). ([arXiv](https://arXiv.org/abs/1809.03702))
- S. Bartunov, A. Santoro, B. Richards, L. Marris, G. Hinton, T. Lillicrap. "Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures". *NIPS.* (2018). ([arXiv](https://arXiv.org/abs/1807.04587))
- R. Feldesh. "The Distributed Engram". (2019). ([bioRxiv](https://www.biorxiv.org/content/10.1101/583195v1))
- Y. Amit. "Deep Learning With Asymmetric Connections and Hebbian Updates". *Front. Comput. Neurosci.* (2019). ([Front. Comput. Neurosci.](https://www.frontiersin.org/articles/10.3389/fncom.2019.00018/full)). ([GitHub](https://github.com/yaliamit/URFB))
- T. Mesnard, G. Vignoud, J. Sacramento, W. Senn, Y. Bengio "Ghost Units Yield Biologically Plausible Backprop in Deep Neural Networks". (2019). ([arXiv](https://arxiv.org/abs/1911.08585))

#### Issue
- F. Crick. "The recent excitement about neural networks". *Nature*. **337**, 129–132 (1989). ([Nat.](https://www.nature.com/articles/337129a0))

### Learning dynamics of neural networks and brains
- J. Shen, M. D. Petkova, F. Liu, C. Tang. "Toward deciphering developmental patterning with deep neural network". (2018). ([bioRxiv](https://www.biorxiv.org/content/early/2018/08/09/374439))
- A.M. Saxe, J.L. McClelland, S. Ganguli. "A mathematical theory of semantic development in deep neural networks". *PNAS*. (2019). ([arXiv](https://arXiv.org/abs/1810.10531)). ([PNAS](https://www.pnas.org/content/early/2019/05/16/1820226116))
- D.V. Raman, A.P. Rotondo, T. O’Leary. "Fundamental bounds on learning performance in neural circuits". *PNAS*. (2019). ([PNAS](https://www.pnas.org/content/116/21/10537))
- R. C. Wilson, A. Shenhav, M. Straccia, J.D. Cohen. "The Eighty Five Percent Rule for optimal learning". *Nat. Commun.* (2019). ([Nat.Commun.](https://www.nature.com/articles/s41467-019-12552-4))

### Few shot Learning
- A. Cortese, B.D. Martino, M. Kawato. "The neural and cognitive architecture for learning from a small sample". *Curr. Opin. Neurobiol.* **55**, 133–141 (2019). ([sciencedirect](https://www.sciencedirect.com/science/article/pii/S0959438818301077))

### A Critique of Pure Learning
- A. Zador. "A Critique of Pure Learning: What Artificial Neural Networks can Learn from Animal Brains". *Nat. Commun.* (2019). ([bioRxiv](https://www.biorxiv.org/content/10.1101/582643v1)). ([Nat. Commun.](https://www.nature.com/articles/s41467-019-11786-6))

## Brain Decoding & Brain-machine interface
- E. Matsuo, I. Kobayashi, S. Nishimoto, S. Nishida, H. Asoh. "Generating Natural Language Descriptions for Semantic Representations of Human Brain Activity". *ACL SRW.* (2016). ([ACL Anthology](https://aclanthology.info/papers/P16-3004/p16-3004))
- Y. Güçlütürk, U. Güçlü, K. Seeliger, S.E.Bosch, R.J. van Lier, M.A.J. van Gerven. "Reconstructing perceived faces from brain activations with deep adversarial neural decoding". *NIPS* (2017). ([NIPS](https://papers.nips.cc/paper/7012-reconstructing-perceived-faces-from-brain-activations-with-deep-adversarial-neural-decoding))
- R. Rao. "Towards Neural Co-Processors for the Brain: Combining Decoding and Encoding in Brain-Computer Interfaces". (2018). ([arXiv](https://arxiv.org/abs/1811.11876))
- G. Shen, T. Horikawa, K. Majima, Y. Kamitani. "Deep image reconstruction from human brain activity". *PLOS* (2019). ([PLOS](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006633))

## Others
- M.S. Goldman. "Memory without Feedback in a Neural Network". *Neuron* (2009). ([sciencedirect](https://www.sciencedirect.com/science/article/pii/S0896627308010830?via%3Dihub))
- R. Yuste. "From the neuron doctrine to neural networks". *Nat. Rev. Neurosci.* **16**, 487–497 (2015). ([Nat. Rev. Neurosci.](https://www.nature.com/articles/nrn3962))
- S. Saxena, J.P. Cunningham. "Towards the neural population doctrine". *Curr. Opin. Neurobiol.* (2019). ([sciencedirect](https://www.sciencedirect.com/science/article/pii/S0959438818300990))
- D.J. Heeger. "Theory of cortical function". *PNAS*. **114**(8), (2017). ([PNAS](https://www.pnas.org/content/114/8/1773))
- C.C. Chow, Y. Karimipanah. "Before and beyond the Wilson-Cowan equations". (2019). ([arXiv](https://arxiv.org/abs/1907.07821))