https://github.com/huawei-noah/efficient-nlp
- Host: GitHub
- URL: https://github.com/huawei-noah/efficient-nlp
- Owner: huawei-noah
- Created: 2021-12-03T01:14:42.000Z (almost 4 years ago)
- Default Branch: main
- Last Pushed: 2024-06-04T18:53:28.000Z (over 1 year ago)
- Last Synced: 2025-03-23T23:26:41.333Z (7 months ago)
- Language: Python
- Size: 232 KB
- Stars: 92
- Watchers: 6
- Forks: 14
- Open Issues: 5
Metadata Files:
- Readme: README.md
README
--------------------------------------------------------------------------------
# KD-NLP
This repository is a collection of Knowledge Distillation (KD) methods implemented by the Huawei Montreal NLP team.

## Included Projects
* [**MATE-KD**](MATE-KD)
* KD for model compression: a study of whether adversarial training can improve student accuracy while using only the teacher's logits, as in standard KD (see the first sketch after this list).
* [MATE-KD: Masked Adversarial TExt, a Companion to Knowledge Distillation](https://arxiv.org/abs/2105.05912v1)
* [**Combined-KD**](Combined-KD)
* Proposes Combined-KD (ComKD), which combines data augmentation with progressive training (see the second sketch after this list).
* [How to Select One Among All? An Extensive Empirical Study Towards the Robustness of Knowledge Distillation in Natural Language Understanding](https://arxiv.org/abs/2109.05696v1)
* [**Minimax-kNN**](Minimax-kNN)
* A sample-efficient, semi-supervised kNN data augmentation technique (see the third sketch after this list).
* [Not Far Away, Not So Close: Sample Efficient Nearest Neighbour Data Augmentation via MiniMax](https://aclanthology.org/2021.findings-acl.309/)
* [**Glitter**](Glitter)
* A universal sample-efficient framework for incorporating augmented data into training (see the final sketch after this list).
* [When Chosen Wisely, More Data Is What You Need: A Universal Sample-Efficient Strategy For Data Augmentation](https://aclanthology.org/2022.findings-acl.84/)
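
All of these projects build on standard logit-based KD, in which the student is trained to match the teacher's temperature-softened output distribution. The PyTorch sketch below shows that loss; the function name and default temperature are illustrative choices, not taken from this repository. MATE-KD additionally trains a masked language model to perturb the input text so that this divergence is maximized before the student minimizes it; that adversarial max step is omitted here.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits: torch.Tensor,
            teacher_logits: torch.Tensor,
            temperature: float = 2.0) -> torch.Tensor:
    """Standard KD objective: KL divergence between the temperature-softened
    teacher and student distributions, using only the teacher's logits."""
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    # Scaling by t**2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)
```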
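
For Combined-KD, one plausible reading of "data augmentation plus progressive training" is a training step that mixes original and augmented batches while gradually shifting weight from the hard-label loss toward the KD term. The linear schedule, the assumption that the models return logits directly, and the reuse of `kd_loss` from the sketch above are all illustrative assumptions, not the paper's recipe.

```python
import torch

def comkd_step(student, teacher, batch, aug_batch, epoch, total_epochs, ce_loss_fn):
    # Hypothetical progressive schedule: the KD weight grows linearly over training.
    alpha = epoch / max(1, total_epochs - 1)
    s_logits = student(batch["input_ids"])
    with torch.no_grad():
        t_logits = teacher(batch["input_ids"])
    # Original examples: interpolate between cross-entropy and KD.
    loss = (1 - alpha) * ce_loss_fn(s_logits, batch["labels"]) \
        + alpha * kd_loss(s_logits, t_logits)
    # Augmented examples carry no gold labels, so they contribute only a KD term.
    s_aug = student(aug_batch["input_ids"])
    with torch.no_grad():
        t_aug = teacher(aug_batch["input_ids"])
    return loss + alpha * kd_loss(s_aug, t_aug)
```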
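
Minimax-kNN augments training data with nearest neighbours retrieved from an unlabeled pool, keeping only the most informative ones. A common reading of the minimax idea: retrieve k candidates in embedding space, then keep the few on which the student diverges most from the teacher, which the student is subsequently trained to minimize. The array layout and precomputed divergence scores below are assumptions for illustration.

```python
import numpy as np

def minimax_knn_augment(example_vec, pool_vecs, pool_texts, divergences, k=8, m=2):
    """Retrieve the k nearest neighbours of one example from an unlabeled pool,
    then keep the m on which the student-teacher divergence is largest."""
    # Cosine similarity against the pool; vectors assumed L2-normalized.
    sims = pool_vecs @ example_vec          # pool_vecs: (N, d), example_vec: (d,)
    knn_idx = np.argsort(-sims)[:k]
    # divergences[i]: precomputed KD divergence of the student on pool item i.
    hardest = sorted(knn_idx, key=lambda i: -divergences[i])[:m]
    return [pool_texts[i] for i in hardest]
```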
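
Glitter's selection strategy, as the one-line description suggests, is to consider many augmented candidates per example but train on only the few with the highest loss, keeping the extra compute bounded. The sketch below assumes models that return logits, a list of pre-tokenized candidates, and the `kd_loss` helper from the first sketch.

```python
import torch

def glitter_select(student, teacher, candidates, top_k=2):
    """Score augmented candidates without gradients and return only the
    top_k hardest ones for the actual (gradient-carrying) training step."""
    with torch.no_grad():
        losses = [kd_loss(student(c["input_ids"]), teacher(c["input_ids"])).item()
                  for c in candidates]
    ranked = sorted(range(len(candidates)), key=lambda i: -losses[i])
    return [candidates[i] for i in ranked[:top_k]]
```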
# License
This project is licensed under the Apache License 2.0.