Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/Robofied/Awesome-NLP-Resources
List: Awesome-NLP-Resources
This repository contains landmark research papers in Natural Language Processing that came out in this century.
- Host: GitHub
- URL: https://github.com/Robofied/Awesome-NLP-Resources
- Owner: Robofied
- License: mit
- Created: 2019-08-26T09:49:32.000Z (about 5 years ago)
- Default Branch: master
- Last Pushed: 2021-02-11T18:28:50.000Z (over 3 years ago)
- Last Synced: 2024-05-21T08:33:42.882Z (6 months ago)
- Topics: artificial-intelligence, bert, language, language-models, machine-learning, natural-language-processing, nlp, papers, vectors
- Homepage:
- Size: 16.2 MB
- Stars: 207
- Watchers: 14
- Forks: 55
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE
Awesome Lists containing this project
- awesome-of-awesome-ml - Awesome-NLP-Resources (by Robofied)
- ultimate-awesome - Awesome-NLP-Resources - This repository contains landmark research papers in Natural Language Processing that came out in this century. (Other Lists / PowerShell Lists)
README
## Awesome NLP Resources
[![Awesome](https://awesome.re/badge.svg)](https://github.com/Robofied/Awesome-NLP-Resources)
[![Maintenance](https://img.shields.io/badge/Maintained%3F-yes-green.svg)](https://GitHub.com/Akshat4112/Awesome-NLP-Resources/graphs/commit-activity) [![MIT license](https://img.shields.io/badge/License-MIT-blue.svg)](https://lbesson.mit-license.org/)

This repository contains landmark research papers and blogs in Natural Language Processing that came out in this century.
## Contents
* [How to Read a Paper?](#how-to-read-a-paper-page_with_curl)
* [List of Research Papers](#list-of-research-papers)
  * [Machine Translation](#machine-translation)
  * [Language Models](#language-models)
  * [Word Embeddings](#word-embeddings)
  * [Image to Text](#image-to-text)
  * [Transformers](#transformers)
* [List of blogs](#list-of-blogs)
  * [Machine Translation](#machine-translation-1)
  * [Image to Text](#image-to-text-1)
  * [Transformers](#transformers-1)

## How to Read a Paper? :page_with_curl:
Reading a paper is not the same as reading a blog post or a novel. Here are a few handy resources to help you get started.

* [How to read an academic article](https://organizationsandmarkets.com/2010/08/31/how-to-read-an-academic-article/)
* [Advice on reading academic papers](https://www.cc.gatech.edu/~akmassey/posts/2012-02-15-advice-on-reading-academic-papers.html)
* [How to read and understand a scientific paper](https://violentmetaphors.com/2013/08/25/how-to-read-and-understand-a-scientific-paper-2/)
* [Should I Read Papers?](http://michaelrbernste.in/2014/10/21/should-i-read-papers.html)
* [The Refreshingly Rewarding Realm of Research Papers](https://www.youtube.com/watch?v=8eRx5Wo3xYA)

## List of Research Papers

### Machine Translation
* [Sequence to Sequence Learning with Neural Networks](https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf) - LSTM-based encoder-decoder approach for sequence-to-sequence problems.
* [Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation](https://arxiv.org/pdf/1406.1078.pdf)
* [Attention Model (Neural Machine Translation by Jointly Learning to Align and Translate)](https://arxiv.org/pdf/1409.0473.pdf) - Introduces the attention mechanism for encoder-decoder models; see the sketch after this list. (Not to be confused with the [Attention Is All You Need](#transformers) paper, which introduces the Transformer.)
* [Understanding Back-Translation at Scale](https://arxiv.org/pdf/1808.09381.pdf)
* [MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning](https://arxiv.org/abs/1911.09483)
* [Scaling Neural Machine Translation](https://arxiv.org/abs/1806.00187)
* [The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation](https://arxiv.org/abs/1804.09849)
* [Convolutional Sequence to Sequence Learning](https://arxiv.org/abs/1705.03122) - Attention-based sequence-to-sequence model built from convolutional layers.
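
The core of the attention mechanism introduced above fits in a few lines. Here is a minimal NumPy sketch of additive (Bahdanau-style) attention; the shapes, weight names, and toy values are illustrative assumptions, not code from the paper.

```python
# Minimal additive (Bahdanau-style) attention: score each encoder state
# against the current decoder state, softmax the scores, and take the
# weighted sum as the context vector. Toy shapes; not the paper's code.
import numpy as np

def additive_attention(query, keys, W_q, W_k, v):
    # e_i = v . tanh(W_q q + W_k k_i): one scalar score per source position
    scores = np.tanh(query @ W_q + keys @ W_k) @ v
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()                  # alignment weights sum to 1
    context = weights @ keys                  # weighted sum of encoder states
    return context, weights

rng = np.random.default_rng(0)
d = 8                                         # toy hidden size
keys = rng.normal(size=(5, d))                # 5 encoder hidden states
query = rng.normal(size=(d,))                 # current decoder hidden state
W_q, W_k = rng.normal(size=(d, d)), rng.normal(size=(d, d))
v = rng.normal(size=(d,))

context, weights = additive_attention(query, keys, W_q, W_k, v)
print(weights.round(3))                       # alignment over source positions
```
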
### Language Models

* [Scalable Hierarchical Distributed Language Model](https://www.cs.toronto.edu/~amnih/papers/hlbl_final.pdf)
* [Bag of Tricks for Efficient Text Classification](https://arxiv.org/pdf/1607.01759.pdf) - fastText (by Facebook AI Research), trained on billions of words for text classification; see the sketch after this list.
* [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
* [Hierarchical Probabilistic Neural Network Language Model](https://www.iro.umontreal.ca/~lisa/pointeurs/hierarchical-nnlm-aistats05.pdf) - Speeds up training and recognition (by a factor of about 200), by Yoshua Bengio's group.
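
The fastText classifier described in the Bag of Tricks paper ships with an official Python binding. A minimal sketch, assuming the `fasttext` pip package is installed; `train.txt` is a hypothetical file with one `__label__`-prefixed example per line.

```python
# Minimal fastText supervised classification (pip install fasttext).
# train.txt is a hypothetical training file, with lines like:
#   __label__positive a great, well-paced movie
import fasttext

model = fasttext.train_supervised(input="train.txt", epoch=5, wordNgrams=2)
labels, probs = model.predict("this was a great movie")
print(labels, probs)
```
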
### Word Embeddings

* [Distributed Representations of Sentences and Documents](https://cs.stanford.edu/~quocle/paragraph_vector.pdf) - Sentence/document-to-vector representations, by Quoc Le and Tomas Mikolov (Google).
* [Distributed Representations of Words and Phrases and their Compositionality](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) - Word2Vec representation, by Tomas Mikolov (Google).
* [Efficient Estimation of Word Representations in Vector Space](https://arxiv.org/pdf/1301.3781.pdf) - High-quality vector representations from huge datasets, by Tomas Mikolov (Google); see the sketch after this list.
* [Deep contextualized word representations](https://arxiv.org/pdf/1802.05365.pdf) - ELMo, based on a deep bidirectional language model, by the Allen Institute for Artificial Intelligence.
* [Enriching Word Vectors with Subword Information](https://arxiv.org/pdf/1607.04606.pdf) - Handles morphology and generates vectors for words not present in the training data, by Facebook AI Research.
* [Misspelling Oblivious Word Embeddings](https://arxiv.org/abs/1905.09755)
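
Word2Vec-style embeddings like those above are straightforward to reproduce with gensim. A minimal sketch; the toy corpus and hyperparameters are illustrative assumptions.

```python
# Train skip-gram Word2Vec on a toy corpus with gensim, then query the
# learned vector space. Corpus and hyperparameters are toy values.
from gensim.models import Word2Vec

sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["dogs", "and", "cats", "are", "animals"],
]

# sg=1 selects the skip-gram objective (vs. CBOW)
model = Word2Vec(sentences, vector_size=32, window=2, min_count=1, sg=1, seed=0)

print(model.wv.most_similar("king", topn=3))   # nearest words in vector space
print(model.wv.similarity("king", "queen"))    # cosine similarity of two words
```
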
### Image to Text

* [Neural Image Caption Generation with Visual Attention](https://arxiv.org/pdf/1502.03044.pdf)
* [Deep Visual-Semantic Alignments for Generating Image Descriptions](https://cs.stanford.edu/people/karpathy/cvpr2015.pdf) - Multimodal Recurrent Neural Network architecture for image description, by [Andrej Karpathy](http://karpathy.github.io/) and Fei-Fei Li.

### Transformers
* [Attention Is All You Need](https://arxiv.org/abs/1706.03762)
* [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/pdf/1810.04805.pdf) - Bidirectional Transformer pre-training for language understanding, by Google AI Language; a minimal usage sketch follows.
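
A pre-trained BERT can be loaded in a few lines with the Hugging Face `transformers` library (a popular reimplementation, not the paper's original codebase). The model name and input sentence here are illustrative.

```python
# Load pre-trained BERT and extract contextual token embeddings
# (pip install transformers torch).
from transformers import AutoModel, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Attention is all you need.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per token; the [CLS] vector (position 0) is a
# common sentence-level representation for downstream tasks.
print(outputs.last_hidden_state.shape)   # (1, num_tokens, 768)
```
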
## List of blogs

### Machine Translation
* [Google Machine Translation Blog](https://ai.googleblog.com/2016/09/a-neural-network-for-machine.html)
* [Email AutoReply and Auto Suggestion](https://ai.googleblog.com/2018/05/smart-compose-using-neural-networks-to.html)
* [Find and repair code errors](https://medium.com/@martin.monperrus/sequence-to-sequence-learning-program-repair-e39dc5c0119b)

### Image to Text
* [Image Captioning Using Keras](https://towardsdatascience.com/image-captioning-with-keras-teaching-computers-to-describe-pictures-c88a46a311b8)

### Transformers
* [The Illustrated Transformer](http://jalammar.github.io/illustrated-transformer/) - The Transformer paper's core ideas, explained visually by [Jay Alammar](http://jalammar.github.io/).
* [The Illustrated BERT](http://jalammar.github.io/illustrated-bert/) - BERT explained by [Jay Alammar](http://jalammar.github.io/).
* [A Visual Guide to Using BERT for the First Time :boom:](http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/) - BERT usage beautifully explained with visuals.

### [Back to Top](#contents)