https://github.com/msgi/nlp-journey
Documents, papers and code related to Natural Language Processing, including Topic Model, Word Embedding, Named Entity Recognition, Text Classification, Text Generation, Text Similarity, Machine Translation, etc.
- Host: GitHub
- URL: https://github.com/msgi/nlp-journey
- Owner: msgi
- License: apache-2.0
- Created: 2019-04-22T12:27:35.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2024-08-06T09:36:00.000Z (about 1 year ago)
- Last Synced: 2025-03-02T08:03:09.208Z (7 months ago)
- Topics: deep-learning, paper
- Homepage: https://github.com/msgi/nlp-journey
- Size: 3.91 KB
- Stars: 1,613
- Watchers: 62
- Forks: 378
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# nlp journey
## 1. Books
1. Handbook of Graphical Models. [`online`](https://stat.ethz.ch/~maathuis/papers/Handbook.pdf)
2. Deep Learning. [`online`](https://www.deeplearningbook.org/)
3. Neural Networks and Deep Learning. [`online`](http://neuralnetworksanddeeplearning.com/)
4. Speech and Language Processing. [`online`](http://web.stanford.edu/~jurafsky/slp3/ed3book.pdf)

## 2. Papers
### 01) Transformer papers
1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. [`paper`](https://arxiv.org/abs/1810.04805) (see the sketch after this list)
2. GPT-2: Language Models are Unsupervised Multitask Learners. [`paper`](https://blog.openai.com/better-language-models/)
3. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. [`paper`](https://arxiv.org/abs/1901.02860)
4. XLNet: Generalized Autoregressive Pretraining for Language Understanding. [`paper`](https://arxiv.org/abs/1906.08237)
5. RoBERTa: Robustly Optimized BERT Pretraining Approach. [`paper`](https://arxiv.org/abs/1907.11692)
6. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. [`paper`](https://arxiv.org/abs/1910.01108)
7. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. [`paper`](https://arxiv.org/abs/1909.11942)
8. T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. [`paper`](https://arxiv.org/abs/1910.10683)
9. ELECTRA: pre-training text encoders as discriminators rather than generators. [`paper`](https://openreview.net/pdf?id=r1xMH1BtvB)
10. GPT-3: Language Models are Few-Shot Learners. [`paper`](https://arxiv.org/pdf/2005.14165.pdf)
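
A minimal sketch of putting item 1 above to work: extracting contextual token embeddings from a pretrained BERT with the huggingface `transformers` package (listed under section 4 below); the model name and sentence are illustrative placeholders.

```python
# Contextual embeddings from a pretrained BERT (item 1 above), via the
# huggingface `transformers` package (see "4. Github" below).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("NLP journey starts here.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per (sub)word token: shape (1, num_tokens, 768).
print(outputs.last_hidden_state.shape)
```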
### 02) Models

1. LSTM (Long Short-term Memory). [`paper`](http://www.bioinf.jku.at/publications/older/2604.pdf) (see the sketch after this list)
2. Sequence to Sequence Learning with Neural Networks. [`paper`](https://arxiv.org/pdf/1409.3215.pdf)
3. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. [`paper`](https://arxiv.org/pdf/1406.1078.pdf)
4. Residual Network(Deep Residual Learning for Image Recognition). [`paper`](https://arxiv.org/pdf/1512.03385.pdf)
5. Dropout(Improving neural networks by preventing co-adaptation of feature detectors). [`paper`](https://arxiv.org/pdf/1207.0580.pdf)
6. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. [`paper`](https://arxiv.org/pdf/1502.03167.pdf)
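
A minimal sketch of item 1 above, assuming PyTorch: `torch.nn.LSTM` implements the gated recurrence from the paper, so a sequence goes in and a hidden state comes out per step; all sizes below are arbitrary placeholders.

```python
# The LSTM of item 1 above, as exposed by torch.nn.LSTM; the input, forget
# and output gates from the paper are handled inside the module.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
x = torch.randn(8, 20, 32)     # (batch, time steps, input features)
output, (h_n, c_n) = lstm(x)

print(output.shape)            # (8, 20, 64): hidden state at every time step
print(h_n.shape, c_n.shape)    # (1, 8, 64) each: final hidden and cell states
```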
### 03) Summaries

1. An overview of gradient descent optimization algorithms. [`paper`](https://arxiv.org/pdf/1609.04747.pdf) (see the sketch after this list)
2. Analysis Methods in Neural Language Processing: A Survey. [`paper`](https://arxiv.org/pdf/1812.08951.pdf)
3. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. [`paper`](https://arxiv.org/pdf/1910.10683.pdf)
4. A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications. [`paper`](https://arxiv.org/pdf/2001.06937.pdf)
5. A Gentle Introduction to Deep Learning for Graphs. [`paper`](https://arxiv.org/pdf/1912.12693.pdf)
6. A Survey on Deep Learning for Named Entity Recognition. [`paper`](https://arxiv.org/pdf/1812.09449.pdf)
7. More Data, More Relations, More Context and More Openness: A Review and Outlook for Relation Extraction. [`paper`](https://arxiv.org/pdf/2004.03186.pdf)
8. Deep Learning Based Text Classification: A Comprehensive Review. [`paper`](https://arxiv.org/pdf/2004.03705.pdf)
9. Pre-trained Models for Natural Language Processing: A Survey. [`paper`](https://arxiv.org/pdf/2003.08271.pdf)
10. A Survey on Contextual Embeddings. [`paper`](https://arxiv.org/pdf/2003.07278.pdf)
11. A Survey on Knowledge Graphs: Representation, Acquisition and Applications. [`paper`](https://arxiv.org/pdf/2002.00388.pdf)
12. Knowledge Graphs. [`paper`](https://arxiv.org/pdf/2003.02320v2.pdf)
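
As a worked illustration of the optimizer survey in item 1 above, here is the classical momentum update on the toy objective f(θ) = θ²; the learning rate and momentum constants are arbitrary choices, not values from the paper.

```python
# Momentum update from the gradient-descent survey (item 1 above):
#   v <- gamma * v + lr * grad(theta);  theta <- theta - v
def grad(theta):
    return 2.0 * theta         # gradient of f(theta) = theta ** 2

theta, v = 5.0, 0.0
lr, gamma = 0.1, 0.9           # illustrative constants
for _ in range(200):
    v = gamma * v + lr * grad(theta)
    theta -= v

print(theta)                   # oscillates but converges toward the minimum at 0
```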
### 04) Pre-training

1. A Neural Probabilistic Language Model. [`paper`](https://www.researchgate.net/publication/221618573_A_Neural_Probabilistic_Language_Model)
2. word2vec Parameter Learning Explained. [`paper`](https://arxiv.org/pdf/1411.2738.pdf)
3. Language Models are Unsupervised Multitask Learners. [`paper`](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf)
4. An Empirical Study of Smoothing Techniques for Language Modeling. [`paper`](https://dash.harvard.edu/bitstream/handle/1/25104739/tr-10-98.pdf?sequence=1)
5. Efficient Estimation of Word Representations in Vector Space. [`paper`](https://arxiv.org/pdf/1301.3781.pdf) (see the sketch after this list)
6. Distributed Representations of Sentences and Documents. [`paper`](https://arxiv.org/pdf/1405.4053.pdf)
7. Enriching Word Vectors with Subword Information(FastText). [`paper`](https://arxiv.org/pdf/1607.04606.pdf)
8. GloVe: Global Vectors for Word Representation. [`online`](https://nlp.stanford.edu/projects/glove/)
9. ELMo (Deep contextualized word representations). [`paper`](https://arxiv.org/pdf/1802.05365.pdf)
10. Pre-Training with Whole Word Masking for Chinese BERT. [`paper`](https://arxiv.org/pdf/1906.08101.pdf)
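
A minimal sketch of skip-gram training in the spirit of item 5 above, assuming the third-party gensim library (not part of this repo); the three-sentence corpus is a toy placeholder.

```python
# Skip-gram word vectors in the spirit of "Efficient Estimation of Word
# Representations in Vector Space" (item 5 above); assumes gensim.
from gensim.models import Word2Vec

sentences = [                  # toy placeholder corpus
    ["natural", "language", "processing"],
    ["word", "embeddings", "capture", "meaning"],
    ["language", "models", "learn", "from", "text"],
]
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(model.wv["language"].shape)          # (50,): one dense vector per word
print(model.wv.most_similar("language"))   # nearest neighbours in the toy space
```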
### 05) Classification

1. Bag of Tricks for Efficient Text Classification (FastText). [`paper`](https://arxiv.org/pdf/1607.01759.pdf) (see the sketch after this list)
2. Convolutional Neural Networks for Sentence Classification. [`paper`](https://arxiv.org/pdf/1408.5882.pdf)
3. Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification. [`paper`](http://www.aclweb.org/anthology/P16-2034)
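
A minimal linear bag-of-n-grams classifier in the spirit of item 1 above, with scikit-learn standing in for the original fastText implementation; the texts and labels are toy placeholders.

```python
# Linear bag-of-n-grams text classifier in the spirit of "Bag of Tricks for
# Efficient Text Classification" (item 1 above); scikit-learn stands in for
# the fastText implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great movie", "terrible film", "loved it", "waste of time"]  # toy data
labels = [1, 0, 1, 0]                                                  # 1 = positive

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["loved this movie"]))   # predicted label for a new review
```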
### 06) Text Generation

1. A Deep Ensemble Model with Slot Alignment for Sequence-to-Sequence Natural Language Generation. [`paper`](https://arxiv.org/pdf/1805.06553.pdf)
2. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient. [`paper`](https://arxiv.org/pdf/1609.05473.pdf)

### 07) Text Similarity
1. Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks. [`paper`](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.723.6492&rep=rep1&type=pdf)
2. Learning Text Similarity with Siamese Recurrent Networks. [`paper`](https://www.aclweb.org/anthology/W16-1617) (see the sketch after this list)
3. A Deep Architecture for Matching Short Texts. [`paper`](http://papers.nips.cc/paper/5019-a-deep-architecture-for-matching-short-texts.pdf)
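
The matching papers above all come down to encoding two texts and scoring the pair; here is a minimal sketch, with mean-pooled random embeddings standing in for a trained siamese encoder.

```python
# Pair scoring for text similarity: encode both texts, compare with cosine
# similarity. Random embeddings stand in for a trained siamese encoder.
import torch
import torch.nn.functional as F

emb = torch.nn.Embedding(1000, 64)   # toy vocabulary of 1000 token ids

def encode(token_ids):
    return emb(torch.tensor(token_ids)).mean(dim=0)   # mean-pool to one vector

a, b = encode([1, 2, 3]), encode([1, 2, 4])
print(F.cosine_similarity(a, b, dim=0).item())        # score in [-1, 1]
```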
### 08) QA

1. A Question-Focused Multi-Factor Attention Network for Question Answering. [`paper`](https://arxiv.org/pdf/1801.08290.pdf)
2. The Design and Implementation of XiaoIce, an Empathetic Social Chatbot. [`paper`](https://arxiv.org/pdf/1812.08989.pdf)
3. A Knowledge-Grounded Neural Conversation Model. [`paper`](https://arxiv.org/pdf/1702.01932.pdf)
4. Neural Generative Question Answering. [`paper`](https://arxiv.org/pdf/1512.01337v1.pdf)
5. Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots. [`paper`](https://arxiv.org/abs/1612.01627)
6. Modeling Multi-turn Conversation with Deep Utterance Aggregation. [`paper`](https://arxiv.org/pdf/1806.09102.pdf)
7. Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network. [`paper`](https://www.aclweb.org/anthology/P18-1103)
8. Deep Reinforcement Learning For Modeling Chit-Chat Dialog With Discrete Attributes. [`paper`](https://arxiv.org/pdf/1907.02848.pdf)

### 09) NMT
1. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. [`paper`](https://arxiv.org/pdf/1406.1078v3.pdf)
2. Neural Machine Translation by Jointly Learning to Align and Translate. [`paper`](https://arxiv.org/pdf/1409.0473.pdf)
3. Transformer (Attention Is All You Need). [`paper`](https://arxiv.org/pdf/1706.03762.pdf) (see the sketch after this list)
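
A minimal sketch of the scaled dot-product attention at the heart of item 3 above; the tensor shapes are illustrative, and a full Transformer adds projections, multiple heads, and masking on top of this.

```python
# Scaled dot-product attention from "Attention Is All You Need" (item 3
# above): softmax(Q K^T / sqrt(d_k)) V.
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # query/key similarities
    weights = F.softmax(scores, dim=-1)             # each row sums to 1
    return weights @ v                              # weighted average of values

q = k = v = torch.randn(1, 5, 16)                   # self-attention over 5 tokens
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([1, 5, 16])
```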
### 10) Summarization

1. Get To The Point: Summarization with Pointer-Generator Networks. [`paper`](https://arxiv.org/pdf/1704.04368.pdf)
2. Deep Recurrent Generative Decoder for Abstractive Text Summarization. [`paper`](https://aclweb.org/anthology/D17-1222)

### 11) Relation Extraction
1. Distant Supervision for Relation Extraction via Piecewise Convolutional Neural Networks. [`paper`](https://www.aclweb.org/anthology/D15-1203)
2. Neural Relation Extraction with Multi-lingual Attention. [`paper`](https://www.aclweb.org/anthology/P17-1004)
3. FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation. [`paper`](https://aclweb.org/anthology/D18-1514)
4. End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures. [`paper`](https://www.aclweb.org/anthology/P16-1105)

### 12) Large Language Models
1. Training language models to follow instructions with human feedback. [`paper`](https://arxiv.org/pdf/2203.02155.pdf)
2. LLaMA: Open and Efficient Foundation Language Models. [`paper`](https://arxiv.org/pdf/2302.13971.pdf)

## 3. Articles
- TRANSFORMERS FROM SCRATCH. [`url`](http://peterbloem.nl/blog/transformers)
- The Illustrated Transformer. [`url`](https://jalammar.github.io/illustrated-transformer/)
- Attention-based-model. [`url`](http://www.wildml.com/2016/01/attention-and-memory-in-deep-learning-and-nlp/)
- Modern Deep Learning Techniques Applied to Natural Language Processing. [`url`](https://nlpoverview.com/)
- Illustrated Guide to LSTM’s and GRU’s: A step by step explanation. [`url`](https://towardsdatascience.com/illustrated-guide-to-lstms-and-gru-s-a-step-by-step-explanation-44e9eb85bf21)
- Applying word2vec to Recommenders and Advertising. [`url`](http://mccormickml.com/2018/06/15/applying-word2vec-to-recommenders-and-advertising/)

## 4. Github
* CLUE. [`github`](https://github.com/CLUEbenchmark/CLUE)
* transformers. [`github`](https://github.com/huggingface/transformers) (see the sketch after this list)
* HanLP. [`github`](https://github.com/hankcs/HanLP)
* ML-For-Beginners. [`github`](https://github.com/microsoft/ML-For-Beginners.git)
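
A quick way to try the `transformers` repository above is its `pipeline` API, which wraps a pretrained model behind a single call; the task and input string are placeholders, and the first run downloads a default model.

```python
# One-call inference with huggingface `transformers` (listed above): the
# pipeline picks and downloads a default model for the chosen task.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("This reading list is very helpful."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```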