https://github.com/alexdremov/adalayers
Adaptive layer selection in frozen transformer models for efficient solving of downstream tasks.
- Host: GitHub
- URL: https://github.com/alexdremov/adalayers
- Owner: alexdremov
- License: MIT
- Created: 2024-02-03T16:28:05.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2025-02-03T14:24:36.000Z (8 months ago)
- Last Synced: 2025-02-03T15:30:05.417Z (8 months ago)
- Topics: finetune, finetuning, pytorch, transfer-learning, transformer
- Language: Python
- Homepage: https://github.com/alexdremov/adalayers/blob/main/text_en.pdf
- Size: 3.02 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Adalayers
Adaptive layer selection in frozen transformer models for efficient solving of downstream tasks.

---
## Best results
> **Note:** results are highly dependent on the base model's domain.
> Obviously, a task-specific model could be used to achieve even higher scores,
> but such a comparison would not be fair.
>
> As you can see, a single frozen base model with the proposed algorithm already achieves great results.

| Dataset | Base model    | Score                    |
|:-------:|:-------------:|-------------------------:|
| IMDB    | RoBERTa-large | 96.1% acc *(SOTA level)* |
| CoLA    | RoBERTa-large | 83.6% acc *(SOTA level)* |
| CoNLL   | RoBERTa-large | 89.4% f1                 |

## Paper
[Using Internal Representations of Frozen Transformer Models to Solve Downstream Tasks](https://github.com/alexdremov/adalayers/blob/main/text_en.pdf)
## Problem
Let's consider the case where you already have a functioning SOTA-level large transformer model and need to solve a different task on the same data:
for example, speech recognition + emotion recognition, text summarization + NER, etc.

One possible solution is to use a second model.
However, deploying a second transformer model requires lots of resources.
Combining two tasks into a single model's training is not always feasible and may
deteriorate the main task's metrics.

## Solution
Let's reuse the transformer's hidden states! It is well known that different
layers of a transformer model extract features of different levels. Therefore,
if we combine the hidden features effectively, we can achieve good results.

Moreover, in this case the base model stays intact, and since we reuse its computations,
the proposed algorithm is highly computationally efficient.

The general idea is presented in the image.
Code for F can be found in [`adalayers/models/ada_layers_base.py`](https://github.com/alexdremov/adalayers/blob/main/adalayers/models/ada_layers_base.py).
All models are implemented with Huggingface interfaces.
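For intuition, here is a minimal sketch of the idea, assuming a Huggingface backbone. It is only an illustration (class and parameter names are made up, not the repository's actual implementation): the backbone stays frozen, a softmax-normalized weight is learned per hidden layer, and a small head classifies the weighted combination.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class AdaptiveLayerPooler(nn.Module):
    """Softmax-weighted combination of all hidden layers of a frozen backbone."""

    def __init__(self, backbone_name: str = "roberta-large", num_labels: int = 2):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(
            backbone_name, output_hidden_states=True
        )
        self.backbone.requires_grad_(False)  # the base model stays frozen
        num_layers = self.backbone.config.num_hidden_layers + 1  # + embeddings layer
        self.layer_logits = nn.Parameter(torch.zeros(num_layers))  # learnable per-layer weights
        self.head = nn.Linear(self.backbone.config.hidden_size, num_labels)

    def forward(self, **inputs):
        with torch.no_grad():  # reuse the frozen model's computations
            hidden = self.backbone(**inputs).hidden_states  # tuple of (B, T, H) tensors
        stacked = torch.stack(hidden)                                   # (L, B, T, H)
        weights = self.layer_logits.softmax(dim=0)                      # (L,)
        combined = (weights[:, None, None, None] * stacked).sum(dim=0)  # (B, T, H)
        return self.head(combined[:, 0])                                # classify from the first token


tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AdaptiveLayerPooler()
logits = model(**tokenizer("A great movie!", return_tensors="pt"))
```

Only the layer weights and the small head are trained, so the frozen backbone can keep serving its original task unchanged.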
## Launch

Training is configured with OmegaConf + Hydra. Configs can be found in [`configs`](https://github.com/alexdremov/adalayers/tree/main/configs).
The environment is managed by Poetry; you can set it up by running `poetry install`.
You can launch a simple training run by calling `adalayers/training/run.py` like
```
python adalayers/training/run.py --config-name adalayers_imdb.yaml
```
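Since configs are Hydra-based and the environment is Poetry-managed, you can also run training inside the Poetry environment and override individual config values from the command line. The `optimization.lr` key below is purely hypothetical; check the files in [`configs`](https://github.com/alexdremov/adalayers/tree/main/configs) for the actual key names.
```
poetry run python adalayers/training/run.py --config-name adalayers_imdb.yaml optimization.lr=1e-4
```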