Ecosyste.ms: Awesome

An open API service indexing awesome lists of open source software.

https://github.com/ailln/nlp-roadmap

🗺️ A learning roadmap for natural language processing.

Topics: natural-language-processing nlp roadmap sequence-labeling word-embedding word-segmentation

README
# Natural Language Processing Roadmap

🗺️ A **learning roadmap** for Natural Language Processing.

> ⚠️ Note:
>
> 1. This project contains a small experiment named `PCB`. Here `PCB` stands for neither Printed Circuit Board nor Process Control Block, but is short for `Paper Code Blog`. I believe that papers, code, and blogs together let us balance theory with practice and grasp each topic quickly!
>
> 2. The number of stars after each paper indicates its importance (*subjective opinion, for reference only*):
>    1. 🌟: ordinary;
>    2. 🌟🌟: important;
>    3. 🌟🌟🌟: very important.

## 1 Word Segmentation

**A word is the smallest linguistic unit capable of independent use.** In natural language processing, text is usually handled with the word as the basic unit. English has a built-in advantage here: all words are delimited by spaces. Chinese words, by contrast, have no explicit boundary markers, so the first task before any Chinese language processing is to split a continuous Chinese sentence into a sequence of words. This splitting process is called **word segmentation**. [Learn more](https://www.v2ai.cn/2018/04/26/nature-language-processing/2-word-segmentation/)

### Surveys

- A Survey of Chinese Word Segmentation Techniques [{Paper}](http://www.lis.ac.cn/CN/article/downloadArticleFile.do?attachType=PDF&id=9402) 🌟
- A Review of Chinese Automatic Word Segmentation Research in China [{Paper}](http://www.lis.ac.cn/CN/article/downloadArticleFile.do?attachType=PDF&id=11361) 🌟
- The State of the Art and Difficulties of Chinese Automatic Word Segmentation [{Paper}](http://sourcedb.ict.cas.cn/cn/ictthesis/200907/P020090722605434114544.pdf) 🌟🌟
- A Critical Review of Chinese Automatic Word Segmentation Research [{Paper}](http://59.108.48.5/course/mining/12-13spring/%E5%8F%82%E8%80%83%E6%96%87%E7%8C%AE/02-01%E6%B1%89%E8%AF%AD%E8%87%AA%E5%8A%A8%E5%88%86%E8%AF%8D%E7%A0%94%E7%A9%B6%E8%AF%84%E8%BF%B0.pdf) 🌟🌟
- Chinese Word Segmentation: Another Decade Review (2007-2017) [{Paper}](https://arxiv.org/pdf/1901.06079.pdf) 🌟🌟🌟
- chinese-word-segmentation [{Code}](https://github.com/Ailln/chinese-word-segmentation)
- A Survey of Deep Learning for Chinese Word Segmentation [{Blog}](http://www.hankcs.com/nlp/segment/depth-learning-chinese-word-segmentation-survey.html)

## 2 Word Embedding

**Word embedding** means finding a mapping or function that generates representations of words in a new space; such a representation is called a "word representation". [Learn more](https://www.v2ai.cn/2018/08/27/nature-language-processing/6-word-embedding/)

### Surveys

- Word Embeddings: A Survey [{Paper}](https://arxiv.org/pdf/1901.09069.pdf) 🌟🌟🌟
- Visualizing Attention in Transformer-Based Language Representation Models [{Paper}](https://arxiv.org/pdf/1904.02679.pdf) 🌟🌟
- **PTMs**: Pre-trained Models for Natural Language Processing: A Survey [{Paper}](https://arxiv.org/pdf/2003.08271.pdf) [{Blog}](https://zhuanlan.zhihu.com/p/115014536) 🌟🌟🌟
- Efficient Transformers: A Survey [{Paper}](https://arxiv.org/pdf/2009.06732.pdf) 🌟🌟
- A Survey of Transformers [{Paper}](https://arxiv.org/pdf/2106.04554.pdf) 🌟🌟
- Pre-Trained Models: Past, Present and Future [{Paper}](https://arxiv.org/pdf/2106.07139.pdf) 🌟🌟
- Pretrained Language Models for Text Generation: A Survey [{Paper}](https://arxiv.org/pdf/2105.10311.pdf) 🌟
- A Practical Survey on Faster and Lighter Transformers [{Paper}](https://arxiv.org/pdf/2103.14636.pdf) 🌟
- The NLP Cookbook: Modern Recipes for Transformer based Deep Learning Architectures [{Paper}](https://arxiv.org/pdf/2104.10640.pdf) 🌟🌟

### Core

- **NNLM**: A Neural Probabilistic Language Model [{Paper}](http://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf) [{Code}](https://github.com/FuYanzhe2/NNLM) [{Blog}](https://zhuanlan.zhihu.com/p/21240807) 🌟
- **W2V**: Efficient Estimation of Word Representations in Vector Space [{Paper}](https://arxiv.org/abs/1301.3781) 🌟🌟
- **Glove**: Global Vectors for Word Representation [{Paper}](https://nlp.stanford.edu/pubs/glove.pdf) 🌟🌟
- **CharCNN**: Character-level Convolutional Networks for Text Classification [{Paper}](https://arxiv.org/pdf/1509.01626.pdf) [{Blog}](https://zhuanlan.zhihu.com/p/51698513) 🌟
- **ULMFiT**: Universal Language Model Fine-tuning for Text Classification [{Paper}](https://arxiv.org/pdf/1801.06146.pdf) 🌟
- **SiATL**: An Embarrassingly Simple Approach for Transfer Learning from Pretrained Language Models [{Paper}](https://www.aclweb.org/anthology/N19-1213.pdf) 🌟
- **FastText**: Bag of Tricks for Efficient Text Classification [{Paper}](https://arxiv.org/pdf/1607.01759.pdf) 🌟🌟
- **CoVe**: Learned in Translation: Contextualized Word Vectors [{Paper}](https://arxiv.org/pdf/1708.00107.pdf) 🌟
- **ELMo**: Deep contextualized word representations [{Paper}](https://arxiv.org/pdf/1802.05365.pdf) 🌟🌟
- **Transformer**: Attention is All you Need [{Paper}](https://arxiv.org/pdf/1706.03762.pdf) [{Code}](https://github.com/tensorflow/tensor2tensor) [{Blog}](http://jalammar.github.io/illustrated-transformer/) 🌟🌟🌟
- **GPT**: Improving Language Understanding by Generative Pre-Training [{Paper}](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf) 🌟
- **GPT2**: Language Models are Unsupervised Multitask Learners [{Paper}](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) [{Code}](https://github.com/openai/gpt-2) [{Blog}](https://openai.com/blog/better-language-models/) 🌟🌟
- **GPT3**: Language Models are Few-Shot Learners [{Paper}](https://arxiv.org/pdf/2005.14165.pdf) [{Code}](https://github.com/openai/gpt-3) 🌟🌟🌟
- **GPT4**: GPT-4 Technical Report [{Paper}](https://arxiv.org/pdf/2303.08774.pdf) 🌟🌟🌟
- **BERT**: Pre-training of Deep Bidirectional Transformers for Language Understanding [{Paper}](https://arxiv.org/pdf/1810.04805.pdf) [{Code}](https://github.com/google-research/bert) [{Blog}](https://zhuanlan.zhihu.com/p/49271699) 🌟🌟🌟
- **UniLM**: Unified Language Model Pre-training for Natural Language Understanding and Generation [{Paper}](https://arxiv.org/pdf/1905.03197.pdf) [{Code}](https://github.com/microsoft/unilm) [{Blog}](https://zhuanlan.zhihu.com/p/68327602) 🌟🌟
- **T5**: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer [{Paper}](https://arxiv.org/pdf/1910.10683.pdf) [{Code}](https://github.com/google-research/text-to-text-transfer-transformer) [{Blog}](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) 🌟
- **ERNIE** (Baidu): Enhanced Representation through Knowledge Integration [{Paper}](https://arxiv.org/pdf/1904.09223.pdf) [{Code}](https://github.com/PaddlePaddle/ERNIE) 🌟
- **ERNIE** (Tsinghua): Enhanced Language Representation with Informative Entities [{Paper}](https://arxiv.org/pdf/1905.07129.pdf) [{Code}](https://github.com/thunlp/ERNIE) 🌟
- **RoBERTa**: A Robustly Optimized BERT Pretraining Approach [{Paper}](https://arxiv.org/pdf/1907.11692.pdf) 🌟
- **ALBERT**: A Lite BERT for Self-supervised Learning of Language Representations [{Paper}](https://arxiv.org/pdf/1909.11942.pdf) [{Code}](https://github.com/google-research/ALBERT) 🌟🌟
- **TinyBERT**: Distilling BERT for Natural Language Understanding [{Paper}](https://arxiv.org/pdf/1909.10351.pdf) 🌟🌟
- **FastFormers**: Highly Efficient Transformer Models for Natural Language Understanding [{Paper}](https://arxiv.org/pdf/2010.13382.pdf) [{Code}](https://github.com/microsoft/fastformers) 🌟🌟

### Others

- word2vec Parameter Learning Explained [{Paper}](https://arxiv.org/pdf/1411.2738.pdf) 🌟🌟
- Semi-supervised Sequence Learning [{Paper}](https://arxiv.org/pdf/1511.01432.pdf) 🌟🌟
- BERT Rediscovers the Classical NLP Pipeline [{Paper}](https://arxiv.org/pdf/1905.05950.pdf) 🌟
- Pre-trained Language Model Papers [{Blog}](https://github.com/thunlp/PLMpapers)
- HuggingFace Transformers [{Code}](https://github.com/huggingface/transformers)
- Fudan FastNLP [{Code}](https://github.com/fastnlp/fastNLP)

## 3 Text Classification

### Surveys

- A Survey on Text Classification: From Shallow to Deep Learning [{Paper}](https://arxiv.org/pdf/2008.00364.pdf) 🌟🌟🌟
- Deep Learning Based Text Classification: A Comprehensive Review [{Paper}](https://arxiv.org/pdf/2004.03705.pdf) 🌟🌟

### CNN

- **TextCNN**: Convolutional Neural Networks for Sentence Classification [{Paper}](https://arxiv.org/pdf/1408.5882.pdf) [{Code}](https://github.com/dennybritz/cnn-text-classification-tf) 🌟🌟🌟
- Convolutional Neural Networks for Text Categorization: Shallow Word-level vs. Deep Character-level [{Paper}](https://arxiv.org/pdf/1609.00718.pdf) 🌟
- **DPCNN**: Deep Pyramid Convolutional Neural Networks for Text Categorization [{Paper}](https://www.aclweb.org/anthology/P17-1052.pdf) [{Code}](https://github.com/Cheneng/DPCNN) 🌟🌟

## 4 Sequence Labeling

### Surveys

- The Development History of Sequence Labeling (DNNs + CRF) [{Blog}](https://zhuanlan.zhihu.com/p/34828874)

### Bi-LSTM + CRF

- End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF [{Paper}](https://www.aclweb.org/anthology/P16-1101) 🌟🌟
- pytorch_NER_BiLSTM_CNN_CRF [{Code}](https://github.com/bamtercelboo/pytorch_NER_BiLSTM_CNN_CRF)
- NN_NER_tensorFlow [{Code}](https://github.com/LopezGG/NN_NER_tensorFlow)
- End-to-end-Sequence-Labeling-via-Bi-directional-LSTM-CNNs-CRF-Tutorial [{Code}](https://github.com/jayavardhanr/End-to-end-Sequence-Labeling-via-Bi-directional-LSTM-CNNs-CRF-Tutorial)
- Bi-directional LSTM-CNNs-CRF [{Blog}](https://zhuanlan.zhihu.com/p/30791481)
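At inference time, the CRF layer in the models above picks the highest-scoring tag sequence via Viterbi decoding. A minimal pure-Python sketch, with made-up emission and transition scores (illustrative numbers only, not taken from the papers):

```python
def viterbi(emissions, transitions, tags):
    """Find the highest-scoring tag sequence.
    emissions: per-token dict of tag -> score (e.g. from a Bi-LSTM).
    transitions: (prev_tag, tag) -> score (the CRF parameters)."""
    # best[t] = (score of the best path ending in tag t, that path)
    best = {t: (emissions[0][t], [t]) for t in tags}
    for emit in emissions[1:]:
        best = {
            t: max(
                ((score + transitions[(p, t)] + emit[t], path + [t])
                 for p, (score, path) in best.items()),
                key=lambda x: x[0],
            )
            for t in tags
        }
    return max(best.values(), key=lambda x: x[0])[1]

tags = ["B", "I", "O"]
# Toy scores: the transition "O -> I" (an invalid BIO move) is penalized.
transitions = {(p, t): (-10.0 if (p, t) == ("O", "I") else 0.0)
               for p in tags for t in tags}
emissions = [{"B": 2.0, "I": 0.1, "O": 0.5},
             {"B": 0.1, "I": 1.5, "O": 0.4},
             {"B": 0.2, "I": 0.3, "O": 2.0}]
print(viterbi(emissions, transitions, tags))  # ['B', 'I', 'O']
```

The point of the CRF on top of a Bi-LSTM is exactly this: the transition scores let globally invalid tag sequences be ruled out even when per-token scores favor them.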

### Others

- Sequence to Sequence Learning with Neural Networks [{Paper}](https://proceedings.neurips.cc/paper/2014/file/a14ac55a4f27472c5d894ec1c3c743d2-Paper.pdf) 🌟
- Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks [{Paper}](https://arxiv.org/pdf/1506.03099.pdf) 🌟

## 5 Dialogue Systems

### Surveys

- A Survey on Dialogue Systems: Recent Advances and New Frontiers [{Paper}](https://arxiv.org/pdf/1711.01731v1.pdf) [{Blog}](https://zhuanlan.zhihu.com/p/45210996) 🌟🌟
- An Introduction to Retrieval-Based Chatbots [{Blog}](https://mp.weixin.qq.com/s/yC8uYwti9Meyt83xkmbmcg) 🌟🌟🌟
- Recent Neural Methods on Slot Filling and Intent Classification for Task-Oriented Dialogue Systems: A Survey [{Paper}](https://arxiv.org/pdf/2011.00564.pdf) 🌟🌟

### Open Domain Dialogue Systems

- **HRED**: Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models [{Paper}](https://arxiv.org/pdf/1507.04808v3.pdf) [{Code}](https://github.com/hsgodhia/hred) 🌟🌟
- Adversarial Learning for Neural Dialogue Generation [{Paper}](https://arxiv.org/pdf/1701.06547.pdf) [{Code}](https://github.com/liuyuemaicha/Adversarial-Learning-for-Neural-Dialogue-Generation-in-Tensorflow) [{Blog}](https://blog.csdn.net/liuyuemaicha/article/details/60581187) 🌟🌟

### Task Oriented Dialogue Systems

- **Joint NLU**: Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling [{Paper}](https://arxiv.org/pdf/1609.01454.pdf) [{Code}](https://github.com/Ailln/chatbot) 🌟🌟
- BERT for Joint Intent Classification and Slot Filling [{Paper}](https://arxiv.org/pdf/1902.10909.pdf) 🌟
- Sequicity: Simplifying Task-oriented Dialogue Systems with Single Sequence-to-Sequence Architectures [{Paper}](https://www.aclweb.org/anthology/P18-1133.pdf) [{Code}](https://github.com/WING-NUS/sequicity) 🌟🌟
- Attention with Intention for a Neural Network Conversation Model [{Paper}](https://arxiv.org/pdf/1510.08565.pdf) 🌟
- **REDP**: Few-Shot Generalization Across Dialogue Tasks [{Paper}](https://arxiv.org/pdf/1811.11707.pdf) [{Blog}](http://www.xuwei.io/2019/03/18/%E3%80%8Afew-shot-generalization-across-dialogue-tasks%E3%80%8B%E8%AE%BA%E6%96%87%E7%AC%94%E8%AE%B0/) 🌟🌟
- **TEDP**: Dialogue Transformers [{Paper}](https://arxiv.org/pdf/1910.00486.pdf) [{Code}](https://github.com/RasaHQ/TED-paper) [{Blog}](https://zhuanlan.zhihu.com/p/336977835) 🌟🌟🌟

### Conversational Response Selection

- Multi-view Response Selection for Human-Computer Conversation [{Paper}](https://aclweb.org/anthology/D16-1036.pdf) 🌟🌟
- **SMN**: Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots [{Paper}](https://www.aclweb.org/anthology/P17-1046.pdf) [{Code}](https://github.com/MarkWuNLP/MultiTurnResponseSelection) [{Blog}](https://zhuanlan.zhihu.com/p/65062025) 🌟🌟🌟
- **DUA**: Modeling Multi-turn Conversation with Deep Utterance Aggregation [{Paper}](https://www.aclweb.org/anthology/C18-1317.pdf) [{Code}](https://github.com/cooelf/DeepUtteranceAggregation) [{Blog}](https://zhuanlan.zhihu.com/p/60618158) 🌟🌟
- **DAM**: Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network [{Paper}](https://www.aclweb.org/anthology/P18-1103.pdf) [{Code}](https://github.com/baidu/Dialogue/tree/master/DAM) [{Blog}](https://zhuanlan.zhihu.com/p/65143297) 🌟🌟🌟
- **IMN**: Interactive Matching Network for Multi-Turn Response Selection in Retrieval-Based Chatbots [{Paper}](https://arxiv.org/pdf/1901.01824.pdf) [{Code}](https://github.com/JasonForJoy/IMN) [{Blog}](https://zhuanlan.zhihu.com/p/68590678) 🌟🌟
- Dialogue Transformers [{Paper}](https://arxiv.org/pdf/1910.00486.pdf) 🌟🌟

## 6 Topic Model

### LDA

- Latent Dirichlet Allocation [{Paper}](https://jmlr.org/papers/volume3/blei03a/blei03a.pdf) [{Blog}](https://arxiv.org/pdf/1908.03142.pdf) 🌟🌟🌟

## 7 Knowledge Graph

### Surveys

- Towards a Definition of Knowledge Graphs [{Paper}](http://ceur-ws.org/Vol-1695/paper4.pdf) 🌟🌟🌟

## 8 Prompt Learning

### Surveys

- **PPP**: Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing [{Paper}](https://arxiv.org/pdf/2107.13586.pdf) [{Blog}](https://zhuanlan.zhihu.com/p/395115779) 🌟🌟🌟

## 9 Graph Neural Network

### Surveys

- Graph Neural Networks for Natural Language Processing: A Survey [{Paper}](https://arxiv.org/pdf/2106.06090.pdf) 🌟🌟

## 10 Sentence Embedding

### Core

- **InferSent**: Supervised Learning of Universal Sentence Representations from Natural Language Inference Data [{Paper}](https://arxiv.org/pdf/1705.02364.pdf) [{Code}](https://github.com/facebookresearch/InferSent) 🌟🌟
- **Sentence-BERT**: Sentence Embeddings using Siamese BERT-Networks [{Paper}](https://arxiv.org/pdf/1908.10084.pdf) [{Code}](https://github.com/UKPLab/sentence-transformers) 🌟🌟🌟
- **BERT-flow**: On the Sentence Embeddings from Pre-trained Language Models [{Paper}](https://arxiv.org/pdf/2011.05864.pdf) [{Code}](https://github.com/bohanli/BERT-flow) [{Blog}](https://zhuanlan.zhihu.com/p/337134133) 🌟🌟
- **SimCSE**: Simple Contrastive Learning of Sentence Embeddings [{Paper}](https://arxiv.org/pdf/2104.08821.pdf) [{Code}](https://github.com/princeton-nlp/SimCSE) 🌟🌟🌟

## ๅ‚่€ƒ

- [thunlp/NLP-THU](https://github.com/thunlp/NLP-THU)
- [iwangjian/Paper-Reading](https://github.com/iwangjian/Paper-Reading)
- [thunlp/PromptPapers](https://github.com/thunlp/PromptPapers)