Pack of LLMs: Model Fusion at Test-Time via Perplexity Optimization
- Host: GitHub
- URL: https://github.com/cmavro/PackLLM
- Owner: cmavro
- Created: 2024-04-17T15:55:47.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-04-25T02:34:06.000Z (about 1 year ago)
- Last Synced: 2024-04-25T03:34:43.726Z (about 1 year ago)
- Topics: in-context-learning, large-language-models, mixture-of-experts, model-fusion
- Language: Python
- Homepage: https://arxiv.org/abs/2404.11531
- Size: 169 KB
- Stars: 2
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- Awesome-LLM-Ensemble
README
# Pack of LLMs: Model Fusion at Test-Time via Perplexity Optimization
We introduce Pack of LLMs (PackLLM), an effective method for test-time fusion that leverages each LLM’s expertise, given an input prompt. PackLLM performs model fusion by solving an optimization problem for determining each LLM’s importance, so that perplexity over the input prompt is minimized. First, our simple PackLLM-sim variant validates that perplexity is a good indicator for measuring each LLM’s expertise. Second, our PackLLM-opt variant approximately solves the perplexity minimization problem via a greedy algorithm. The derived importance weights are used to combine the LLMs during inference.
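To make the idea concrete, here is a minimal sketch of perplexity-based weighting and fused decoding, assuming HuggingFace-style causal LMs that share a tokenizer and vocabulary. The softmax over negative log-perplexities is an illustrative stand-in for PackLLM-sim's weighting, and the greedy PackLLM-opt procedure is not shown; function names such as `fusion_weights` and `fused_next_token_probs` are hypothetical, not part of this repository.

```python
import torch
import torch.nn.functional as F


def prompt_perplexity(model, tokenizer, prompt):
    """Perplexity of `model` on the input prompt (teacher-forced)."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean per-token negative log-likelihood
    return torch.exp(loss).item()


def fusion_weights(models, tokenizer, prompt, temperature=1.0):
    """Importance weights from prompt perplexity: lower perplexity -> higher weight.

    Illustrative PackLLM-sim-style weighting (softmax over negative
    log-perplexities); PackLLM-opt instead solves the perplexity
    minimization greedily.
    """
    ppls = torch.tensor([prompt_perplexity(m, tokenizer, prompt) for m in models])
    return F.softmax(-torch.log(ppls) / temperature, dim=0)


def fused_next_token_probs(models, weights, input_ids):
    """Mix the experts' next-token distributions with the importance weights."""
    fused = None
    for w, model in zip(weights, models):
        with torch.no_grad():
            logits = model(input_ids).logits[:, -1, :]
        p = w * F.softmax(logits, dim=-1)
        fused = p if fused is None else fused + p
    return fused
```

In a generation loop, the weights are computed once from the input prompt and then reused to mix each expert's next-token distribution at every decoding step.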
## Directory Layout
```bash
./
|---- downstream_tasks/ # downstream task experiments with PackLLM
|---- language_modeling_tasks/ # to be updated soon.
```

## Citation
```
@article{mavromatis2024packllm,
title={Pack of LLMs: Model Fusion at Test-Time via Perplexity Optimization},
author={Mavromatis, Costas and Karypis, Petros and Karypis, George},
journal={arXiv preprint arXiv:2404.11531},
year={2024}
}
```