<p align="center" width="50%">
<img src="docs/assets/logo.png" alt="LMFlow" style="width: 50%; min-width: 200px; display: block; margin: auto; background-color: transparent;">
</p>

# LMFlow

<h4 align="center">
    <p>
        <b>English</b> |
        <a href="https://github.com/OptimalScale/LMFlow/blob/main/readme/README_zh-hans.md">简体中文</a> |
        <a href="https://github.com/OptimalScale/LMFlow/blob/main/readme/README_es.md">Español</a> |
        <a href="https://github.com/OptimalScale/LMFlow/blob/main/readme/README_jp.md">日本語</a> |
        <a href="https://github.com/OptimalScale/LMFlow/blob/main/readme/README_ko.md">한국어</a> |
        <a href="https://github.com/OptimalScale/LMFlow/blob/main/readme/README_hindi.md">हिंदी</a>
    </p>
</h4>

[![Website](https://img.shields.io/badge/Website-Demo-20B2AA.svg)](https://lmflow.com)
[![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/OptimalScale/LMFlow/blob/main/LICENSE)
[![Python 3.9+](https://img.shields.io/badge/Python-3.9+-blue.svg)](https://www.python.org/downloads/release/python-390/)
[![Doc](https://img.shields.io/badge/Website-Doc-ff69b4.svg)](https://optimalscale.github.io/LMFlow/)
[![Embark](https://img.shields.io/badge/Discord-LMFlow-%237289da.svg?logo=discord)](https://discord.gg/u9VJNpzhvA)
[![slack badge](https://img.shields.io/badge/Slack-Join-blueviolet?logo=slack&amp)](https://join.slack.com/t/lmflow/shared_invite/zt-1wju9nicy-woXbNtS~5MavHSAtiMxmxQ)
[![WeChat badge](https://img.shields.io/badge/WeChat-Join-brightgreen?logo=wechat&amp)](https://ibb.co/ZhM4hhn)

An extensible, convenient, and efficient toolbox for finetuning large machine learning models, designed to be user-friendly, fast, reliable, and accessible to the entire community.

<p align="center" width="100%">
<img src="docs/assets/features.png" alt="LMFlow-features" style="width: 100%; min-width: 300px; display: block; margin: auto;">
</p>

## Latest News
> [!IMPORTANT]
> * :exclamation: [2025-07-09] We have made a major update to LMFlow, with full Accelerate support and extensive streamlining. If you're looking for the previous version, please use `git checkout v0.0.10`, or check out the [v0.0.10 branch](https://github.com/OptimalScale/LMFlow/tree/v0.0.10). View all releases [here](https://github.com/OptimalScale/LMFlow/tags).

* [2024-12-02] Support [Hymba](https://github.com/NVlabs/hymba), a new family of small language models featuring a hybrid-head parallel architecture.
Check out [Post-training Hymba](https://github.com/OptimalScale/LMFlow/tree/main/experimental/Hymba) for more details.
* [2024-07-01] 🏆 LMFlow receives the [**Best Demo Paper Award**](https://docs.google.com/presentation/d/1TVDooAZqkNObz5ysVhDFtqnnVHR-u8wqYvgix-gzPMs/edit#slide=id.g2e55907bbcc_0_70) at **NAACL 2024**! 🎉
* [2024-06-30] Expanding optimization options! We now support custom optimizer training with a variety of optimizers. Dive into the details and try out the new features with our updated script at [custom_optimizers](https://github.com/OptimalScale/LMFlow/blob/main/scripts/run_finetune_with_custom_optim.sh).
* [2024-04-25] :rocket: Support conversation templates! We've preset the latest [Llama-3](https://huggingface.co/meta-llama/Meta-Llama-3-70B) and [Phi-3](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) conversation templates as well as some frequently used templates such as `chatml` (see all templates [here](https://optimalscale.github.io/LMFlow/examples/DATASETS.html#conversation-template)), and we are working on adding more preset templates. Add the corresponding `--conversation_template` to the shell script and you are all set! :rocket:

<details> <summary>More news...</summary>

* [2024-03-27] Support [LISA](https://arxiv.org/abs/2403.17919), enabling 7B training in 24G memory without offloading!
* [2023-09-11] Support [speculative decoding](https://arxiv.org/abs/2211.17192). Check out [speculative_decoding](https://github.com/OptimalScale/LMFlow/blob/main/scripts/speculative_decoding/README.md) for usage and acceleration details.
* [2023-08-14] Support long-context inference with position interpolation (Linear & NTK scaling) for LLaMA models. Check out [position_interpolation](https://github.com/OptimalScale/LMFlow/blob/main/readme/Position_Interpolation.md) for more details.
* [2023-08-07] Support [Flash Attention-2](https://crfm.stanford.edu/2023/07/17/flash2.html).
Check out [flash_attention](https://github.com/OptimalScale/LMFlow/blob/main/readme/flash_attn2.md) for more details.
* [2023-08-02] Support [Llama2](https://ai.meta.com/llama/), [ChatGLM2](https://huggingface.co/THUDM/chatglm2-6b), and [Baichuan](https://huggingface.co/baichuan-inc/Baichuan-7B) models.
* [2023-07-23] The [LMFlow multimodal chatbot](https://github.com/OptimalScale/LMFlow/blob/main/scripts/run_vis_chatbot_gradio_minigpt4.sh) is now available! It supports multimodal inputs of images and text. An [online demo](http://multimodal.lmflow.online) is also provided (we host the service on a single GPU, so you may occasionally see "queuing" or "application busy" messages when multiple users access it at the same time; please wait and try again later). ![image](https://github.com/OptimalScale/LMFlow/blob/rpan-vision-encoder/docs/assets/multimodal-chatbot-demo.gif)
* [2023-06-22] The [LMFlow paper](https://arxiv.org/abs/2306.12420) is out! Check out our implementation details at https://arxiv.org/abs/2306.12420
* [2023-06-16] Our finetuned Robin-33B-V2 scored an impressive 64.1 on the Huggingface LLM leaderboard in our offline evaluation, outperforming major open-source LLMs! All checkpoints (7B, 13B, 33B, and 65B) are [released](https://huggingface.co/OptimalScale)! Check out the performance [here](https://medium.com/@hkust.ml/robin-v2-launches-achieves-unparalleled-performance-on-openllm-4f6886e822c1).
* [2023-06-07] LMFlow is now officially available on PyPI! Install it with `pip install lmflow-finetune`!
* [2023-05-30] Release [Robin-13B-v2](https://huggingface.co/OptimalScale/robin-13b-v2-delta) and [Robin-33B-v2](https://huggingface.co/OptimalScale/robin-33b-v2-delta)!
* [2023-05-15] Release [LMFlow-data](http://lmflow.org:5000/lmflow_data.tar.gz), the training dataset of Robin-7B-v2.
A new [test dataset](http://lmflow.org:5000/lmflow_chat_en_dialog_multiturn_single_nll_text2text.tar.gz) is also released.
* [2023-05-09] Release [Robin-7B-v2](http://lmflow.org:5000/robin-7b-v2-delta.tar.gz), achieving competitive performance on chitchat, commonsense reasoning, and instruction-following tasks. Refer to our [comprehensive study](https://medium.com/@hkust.ml/lmflow-benchmark-an-automatic-evaluation-framework-for-open-source-llms-ef5c6f142418).
* [2023-05-08] Release [LMFlow Benchmark](https://medium.com/@hkust.ml/lmflow-benchmark-an-automatic-evaluation-framework-for-open-source-llms-ef5c6f142418), an automatic evaluation framework for open-source chat-style LLMs. [Benchmark results](https://docs.google.com/spreadsheets/d/1JYh4_pxNzmNA9I0YM2epgRA7VXBIeIGS64gPJBg5NHA/edit#gid=0) on 31 popular models are reported. [Participate in LMFlow Benchmark](https://github.com/OptimalScale/LMFlow#33-lmflow-benchmark).
* [2023-04-21] Release [Robin-7B](http://lmflow.org:5000/robin-7b.tar.gz) (based on LLaMA-7B) and two models for commercial use: Parakeets-2.7B (based on GPT-NEO-2.7B) and Cokatoo-7B (based on StableLM-7B). [Download here](https://github.com/OptimalScale/LMFlow/tree/main#model-zoo)
* [2023-04-15] Inference: Support streaming output and ChatGLM.
* [2023-04-10] We propose a new alignment algorithm: [Reward rAnked FineTuning (RAFT)](https://optimalscale.github.io/LMFlow/examples/raft.html), which is more efficient than conventional (PPO-based) RLHF.
[[Paper](https://arxiv.org/abs/2304.06767)]
* [2023-04-02] The [web service](https://lmflow.com/) is online!
* [2023-04-01] Release three instruction-tuned checkpoints and three medical checkpoints in the [model zoo](https://github.com/OptimalScale/LMFlow#model-zoo): LLaMA-7B-tuned, LLaMA-13B-tuned, LLaMA-33B-tuned, LLaMA-7B-medical, LLaMA-13B-medical, and LLaMA-33B-medical.
* [2023-03-27] Support full tuning and LoRA tuning for all decoder models.
* [2023-03-27] [Task-tuned model beats ChatGPT on the medical domain](https://github.com/OptimalScale/LMFlow#model-performance).
* [2023-03-27] Release code and checkpoints - [version 0.0.1](https://optimalscale.github.io/LMFlow/)! [Our task-tuned model beats ChatGPT on the medical domain](https://github.com/OptimalScale/LMFlow#model-performance).

</details>

## Table of Contents

- [LMFlow](#lmflow)
  - [Latest News](#latest-news)
  - [Table of Contents](#table-of-contents)
  - [Quick Start](#quick-start)
    - [Setup](#setup)
    - [Prepare Dataset](#prepare-dataset)
    - [Finetuning](#finetuning)
      - [Estimated Hardware Requirement](#estimated-hardware-requirement)
      - [Full Finetuning](#full-finetuning)
      - [LISA](#lisa)
      - [LoRA](#lora)
    - [Inference](#inference)
    - [Deployment](#deployment)
    - [Evaluation](#evaluation)
  - [Supported Features](#supported-features)
  - [Support](#support)
  - [License](#license)
  - [Citation](#citation)


## Quick Start

### Setup

Our package has been tested on Linux (Ubuntu 20.04). Other platforms (macOS, Windows) are not fully tested, and you may encounter unexpected errors.
If you are using LMFlow for the first time, we recommend trying it on a Linux machine or Google Colab.

```bash
git clone -b v1.0.0 https://github.com/OptimalScale/LMFlow.git
cd LMFlow
conda create -n lmflow python=3.9 -y
conda activate lmflow
conda install mpi4py
pip install -e .
```

<details><summary> Looking for a previous version? </summary>

```bash
git clone -b v0.0.10 https://github.com/OptimalScale/LMFlow.git
cd LMFlow
conda create -n lmflow python=3.9 -y
conda activate lmflow
conda install mpi4py
pip install -e .
```

</details>

<details><summary> For CUDA versions 10.3-11.7 </summary>

```bash
git clone -b v0.0.5 https://github.com/OptimalScale/LMFlow.git
cd LMFlow
conda create -n lmflow python=3.9 -y
conda activate lmflow
conda install mpi4py
pip install -e .
```

</details>

> [!TIP]
> We use WandB to track and visualize the training process by default. Before running the training scripts, users may need to log in to WandB with:
>
>```bash
>wandb login
>```
>
> For detailed instructions, refer to the [WandB Quickstart Guide](https://docs.wandb.ai/quickstart/). Step 1 (registration) and Step 2 (login using your WandB API key) should be sufficient to set up your environment.
>
> <details><summary>Disabling wandb</summary>
>
> You can disable wandb in either of two ways:
>
> 1. Set the environment variable before running the training command:
>
>```bash
>export WANDB_MODE=disabled
>```
>
> 2. OR specify the integrations to report results and logs to.
In the training script, add:
>
>```bash
>--report_to none \
>```
>
> </details>

### Prepare Dataset

Please refer to our [doc](https://optimalscale.github.io/LMFlow/examples/DATASETS.html).

### Finetuning

#### Estimated Hardware Requirement

| Method                 | 0.5B |  3B  |  7B  |  14B  |  30B  |  70B  |  `x`B   |
| ---------------------- | ---- | ---- | ---- | ----- | ----- | ----- | ------- |
| Full `bf16`/`fp16`     |  9GB | 55GB |120GB | 240GB | 600GB | 1200GB| `18x`GB |
| LoRA                   |  1GB | 6GB  | 16GB |  32GB |  64GB | 160GB |  `2x`GB |
| QLoRA `quant_bit=8`    | 0.7GB| 3GB  | 10GB |  20GB |  40GB |  80GB |  `x`GB  |
| QLoRA `quant_bit=4`    | 0.4GB| 1.5GB|  6GB |  12GB |  24GB |  48GB | `x/2`GB |


#### Full Finetuning

Full finetuning updates all the parameters of the language model.
Here is an example of finetuning a GPT-2 base model:

```sh
cd data && ./download.sh alpaca && cd -

bash ./scripts/run_finetune.sh \
  --model_name_or_path gpt2 \
  --dataset_path data/alpaca/train_conversation \
  --output_model_path output_models/finetuned_gpt2
```

> [!TIP]
> For conversation datasets, specify a conversation template for better performance by adding `--conversation_template` to the command.
>
> <details><summary>Llama-3-8B conversation dataset example</summary>
>
>```bash
>cd data && ./download.sh alpaca && cd -
>
>bash ./scripts/run_finetune.sh \
>  --model_name_or_path meta-llama/Meta-Llama-3-8B \
>  --dataset_path data/alpaca/train_conversation \
>  --conversation_template llama3 \
>  --output_model_path output_models/finetuned_llama3_8b
>```
>
> </details>

#### LISA

[LISA](https://arxiv.org/abs/2403.17919) is a memory-efficient finetuning algorithm that allows
a tradeoff between memory usage and the number of randomly unfrozen layers. This script has currently only been tested on a single GPU. Please stay tuned for our latest updates :smile:

```sh
cd data && ./download.sh alpaca && cd -

bash ./scripts/run_finetune_with_lisa.sh \
  --model_name_or_path meta-llama/Llama-2-7b-hf \
  --dataset_path data/alpaca/train_conversation \
  --output_model_path output_models/finetuned_llama2_7b \
  --lisa_activated_layers 1 \
  --lisa_interval_steps 20
```

> [!TIP]
> <details><summary>Llama-2-7B conversation dataset example</summary>
>
>```bash
>cd data && ./download.sh alpaca && cd -
>
>bash ./scripts/run_finetune_with_lisa.sh \
>  --model_name_or_path meta-llama/Llama-2-7b-hf \
>  --dataset_path data/alpaca/train_conversation \
>  --conversation_template llama2 \
>  --output_model_path output_models/finetuned_llama2_7b_lisa \
>  --lisa_activated_layers 1 \
>  --lisa_interval_steps 20
>```
>
> </details>

#### LoRA

LoRA is a parameter-efficient finetuning algorithm that is more efficient than full finetuning.

```sh
cd data && ./download.sh alpaca && cd -

bash ./scripts/run_finetune_with_lora.sh \
  --model_name_or_path facebook/galactica-1.3b \
  --dataset_path data/alpaca/train_conversation \
  --output_lora_path output_models/finetuned_galactica_lora
```

> [!TIP]
> <details><summary>Llama-2-7B conversation dataset example</summary>
>
>```bash
>cd data && ./download.sh alpaca && cd -
>
>bash ./scripts/run_finetune_with_lora.sh \
>  --model_name_or_path meta-llama/Llama-2-7b-hf \
>  --dataset_path data/alpaca/train_conversation \
>  --conversation_template llama2 \
>  --output_model_path output_models/finetuned_llama2_7b_lora
>```
>
> </details>
>
> <details><summary>Merge LoRA Weight</summary>
>
>Merge the LoRA weight and the base model into one using:
>
>```sh
>bash ./scripts/run_merge_lora.sh \
>  --model_name_or_path Qwen/Qwen1.5-1.8B \
>  --lora_model_path output_models/lora \
>  --output_model_path output_models/lora_merged
>```
>
></details>

### Inference

After finetuning, you can run the following command to chat with the model:
```sh
bash ./scripts/run_chatbot.sh output_models/finetuned_gpt2
```

> [!TIP]
> We recommend using vLLM for faster inference.
>
> <details><summary>Faster inference using vLLM</summary>
>
>```bash
>bash ./scripts/run_vllm_inference.sh \
>   --model_name_or_path Qwen/Qwen2-0.5B \
>   --dataset_path data/alpaca/test_conversation \
>   --output_dir data/inference_results
>```
>
> </details>

### Deployment

If you want to deploy your own model locally, we provide a Gradio-based UI for building chatbots.
Running the following command will launch the demo for robin-7b:

```sh
pip install gradio
python ./examples/chatbot_gradio.py --deepspeed configs/ds_config_chatbot.json --model_name_or_path YOUR-LLAMA --lora_model_path ./robin-7b --prompt_structure "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.###Human: {input_text}###Assistant:" --end_string "#" --max_new_tokens 200
```

### Evaluation

[LMFlow Benchmark](https://blog.gopenai.com/lmflow-benchmark-an-automatic-evaluation-framework-for-open-source-llms-ef5c6f142418) is an automatic evaluation framework for open-source large language models.
We use negative log likelihood (NLL) as the metric to evaluate different aspects of a language model: chitchat, commonsense reasoning, and instruction-following abilities.

You can directly run the LMFlow benchmark evaluation to obtain results and participate in the
[LLM comparison](https://docs.google.com/spreadsheets/d/1JYh4_pxNzmNA9I0YM2epgRA7VXBIeIGS64gPJBg5NHA/edit?usp=sharing).
For example, to run GPT2-XL, one may execute

```sh
bash ./scripts/run_benchmark.sh --model_name_or_path gpt2-xl
```

`--model_name_or_path` is required; you may fill in a Hugging Face model name or a local model path here.

To check the evaluation results, see `benchmark.log` in `./output_dir/gpt2-xl_lmflow_chat_nll_eval`,
`./output_dir/gpt2-xl_all_nll_eval`, and `./output_dir/gpt2-xl_commonsense_qa_eval`.

## Supported Features

<details> <summary>Finetune Acceleration & Memory Optimization</summary>

* LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning

  LISA is a novel and memory-efficient training strategy for large language models that outperforms existing methods like LoRA by selectively freezing layers during optimization. Check out [LISA](https://arxiv.org/abs/2403.17919) for more details.
  In LMFlow, activate LISA using `--use_lisa 1` in your training command. Control the number of activated layers with `--lisa_activated_layers 2`, and adjust the freezing interval using `--lisa_interval_steps 20`.
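The layer-selection idea behind LISA can be sketched in a few lines of Python. This is a toy illustration of the schedule only, not LMFlow's actual implementation, and the function names are made up: every `interval_steps` optimizer steps, a fresh random subset of `activated_layers` layers is chosen, and a trainer would then enable gradients only for that subset.

```python
import random

def sample_active_layers(num_layers, activated_layers, rng):
    """Pick which layer indices to unfreeze for the next interval."""
    return sorted(rng.sample(range(num_layers), k=activated_layers))

def lisa_schedule(num_layers, activated_layers, interval_steps, total_steps, seed=0):
    """Yield (step, active_layers); the active set is resampled every interval."""
    rng = random.Random(seed)
    active = sample_active_layers(num_layers, activated_layers, rng)
    for step in range(total_steps):
        if step > 0 and step % interval_steps == 0:
            active = sample_active_layers(num_layers, activated_layers, rng)
        # A real trainer would set requires_grad=True only on `active` here.
        yield step, active
```

Because only a couple of layers carry gradients and optimizer state at any given time, peak memory stays far below that of full finetuning.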

* LoRA

  LoRA is a parameter-efficient finetuning algorithm and is more efficient than full finetuning. Check out [finetuning-lora](#finetuning-lora) for more details.

* FlashAttention

  LMFlow supports both FlashAttention-1 and the latest FlashAttention-2. Check out [flash_attention](https://github.com/OptimalScale/LMFlow/blob/main/readme/flash_attn2.md) for more details.

* Gradient Checkpointing

  [Gradient checkpointing](https://github.com/cybertronai/gradient-checkpointing) is a memory optimization technique that trades compute for memory.
  It is useful when the model is too large to fit into GPU memory.
  Use it by simply adding `--gradient_checkpointing` to your training command.

* DeepSpeed ZeRO-3

  LMFlow supports [DeepSpeed ZeRO-3 Offload](https://www.deepspeed.ai/2021/03/07/zero3-offload.html).
  We provide an example [DeepSpeed config](https://github.com/OptimalScale/LMFlow/blob/main/configs/ds_config_zero3.json), which you can use directly.

</details>

<details> <summary>Inference Acceleration</summary>

* LLaMA Inference on CPU

  Thanks to the great efforts of [llama.cpp](https://github.com/ggerganov/llama.cpp), it is possible for everyone to run their LLaMA models on a CPU with 4-bit quantization. We provide a script to convert LLaMA LoRA weights to `.pt` files, so you only need `convert-pth-to-ggml.py` in llama.cpp to perform the quantization.

* FlashAttention

  LMFlow supports both FlashAttention-1 and the latest FlashAttention-2. Check out [flash_attention](https://github.com/OptimalScale/LMFlow/blob/main/readme/flash_attn2.md) for more details.

* vLLM

  Try vLLM for fast and easy-to-use LLM inference and serving.
Thanks for the [great work](https://github.com/vllm-project/vllm)!

</details>

<details> <summary>Long Context</summary>

* Position Interpolation for LLaMA Models

  LMFlow now supports the latest Linear & NTK (Neural Tangent Kernel) scaling techniques for LLaMA models. Check out [position_interpolation](https://github.com/OptimalScale/LMFlow/blob/main/readme/Position_Interpolation.md) for more details.

</details>

<details> <summary>Model Customization</summary>

* Vocabulary Extension

  You can now train your own SentencePiece tokenizer and merge it with the model's original Hugging Face tokenizer. Check out [vocab_extension](https://github.com/OptimalScale/LMFlow/blob/main/scripts/vocab_extension) for more details.

</details>

<details> <summary>Multimodal</summary>

* Multimodal Chatbot

  LMFlow supports multimodal inputs of images and text. Check out our [LMFlow multimodal chatbot](https://github.com/OptimalScale/LMFlow/blob/main/scripts/run_vis_chatbot_gradio_minigpt4.sh).

</details>

<details> <summary>Custom Optimization</summary>

* Custom Optimization

  LMFlow now supports custom optimizer training with a variety of optimizers. Elevate your model's performance with tailored optimization strategies. Dive into the details and try out the new features with our updated script at [custom_optimizers](https://github.com/OptimalScale/LMFlow/blob/main/scripts/run_finetune_with_custom_optim.sh).

  The following table evaluates the performance of custom optimizers when fine-tuning GPT-2 on the Alpaca dataset, emphasizing their individual impacts on the training loss.
  The hyperparameter settings use the default configurations, which can be customized and adjusted in [custom_optimizers](https://github.com/OptimalScale/LMFlow/blob/main/scripts/run_finetune_with_custom_optim.sh). Note that the evaluations were run for only 0.1 epochs, to provide a preliminary view of each optimizer's effectiveness.

  | Optimizer Name    | Train Loss |
  |-------------------|------------|
  | RMSprop           | 2.4016     |
  | LION-32bit        | 2.4041     |
  | Adam              | 2.4292     |
  | AdamP             | 2.4295     |
  | AdamW             | 2.4469     |
  | AdaFactor         | 2.4543     |
  | AdaBound          | 2.4547     |
  | AdamWScheduleFree | 2.4677     |
  | Adan              | 2.5063     |
  | NAdam             | 2.5569     |
  | AdaBelief         | 2.5857     |
  | AdaMax            | 2.5924     |
  | RAdam             | 2.6104     |
  | AdaDelta          | 2.6298     |
  | AdaGrad           | 2.8657     |
  | Yogi              | 2.9314     |
  | NovoGrad          | 3.1071     |
  | Sophia            | 3.1517     |
  | LAMB              | 3.2350     |
  | LARS              | 3.3329     |
  | SGDScheduleFree   | 3.3541     |
  | SGDP              | 3.3567     |
  | SGD               | 3.3734     |

</details>

## Support

If you need any help, please submit a GitHub issue.

## License

The code included in this project is licensed under the [Apache 2.0 license](https://github.com/OptimalScale/LMFlow/blob/main/LICENSE).
If you wish to use the code and models included in this project for commercial purposes, please sign this [document](https://docs.google.com/forms/d/e/1FAIpQLSfJYcci6cbgpIvx_Fh1xDL6pNkzsjGDH1QIcm4cYk88K2tqkw/viewform?usp=pp_url) to obtain authorization.

## Citation

If you find this repository useful, please consider giving ⭐ and citing our [paper](https://arxiv.org/abs/2306.12420):

```citation
@article{diao2023lmflow,
  title={Lmflow: An extensible toolkit for finetuning and inference of large foundation models},
  author={Diao, Shizhe and Pan, Rui and Dong, Hanze and Shum, Ka Shun and Zhang, Jipeng and Xiong, Wei and Zhang, Tong},
  journal={arXiv preprint arXiv:2306.12420},
  year={2023}
}
```

```citation
@article{dong2023raft,
  title={Raft: Reward ranked finetuning for generative foundation model alignment},
  author={Dong, Hanze and Xiong, Wei and Goyal, Deepanshu and Pan, Rui and Diao, Shizhe and Zhang, Jipeng and Shum, Kashun and Zhang, Tong},
  journal={arXiv preprint arXiv:2304.06767},
  year={2023}
}
```

```citation
@article{pan2024lisa,
  title={LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning},
  author={Pan, Rui and Liu, Xiang and Diao, Shizhe and Pi, Renjie and Zhang, Jipeng and Han, Chi and Zhang, Tong},
  journal={arXiv preprint arXiv:2403.17919},
  year={2024}
}
```