{"id":13472462,"url":"https://github.com/unslothai/unsloth","last_synced_at":"2026-04-01T20:24:04.997Z","repository":{"id":209893136,"uuid":"725205304","full_name":"unslothai/unsloth","owner":"unslothai","description":"Finetune Qwen3, Llama 4, TTS, DeepSeek-R1 \u0026 Gemma 3 LLMs 2x faster with 70% less memory! 🦥","archived":false,"fork":false,"pushed_at":"2025-05-05T01:06:41.000Z","size":6773,"stargazers_count":38080,"open_issues_count":1045,"forks_count":2982,"subscribers_count":224,"default_branch":"main","last_synced_at":"2025-05-05T14:09:19.637Z","etag":null,"topics":["deepseek","deepseek-r1","fine-tuning","finetuning","gemma","gemma3","llama","llama-4","llama3","llama4","llm","llms","lora","mistral","qlora","qwen","qwen3","text-to-speech","tts","unsloth"],"latest_commit_sha":null,"homepage":"https://unsloth.ai","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/unslothai.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null},"funding":{"github":null,"patreon":null,"open_collective":null,"ko_fi":"unsloth","tidelift":null,"community_bridge":null,"liberapay":null,"issuehunt":null,"otechie":null,"lfx_crowdfunding":null,"custom":null}},"created_at":"2023-11-29T16:50:09.000Z","updated_at":"2025-05-05T14:02:35.000Z","dependencies_parsed_at":"2024-03-14T10:27:45.852Z","dependency_job_id":"039ebe64-72b8-4dea-bd9a-26368e2eb8f5","html_url":"https://github.com/unslothai/unsloth","commit_stats":{"total_commits":530,"total_committers":13,"mean_commits":40.76923076923077,"dds":0.
03584905660377358,"last_synced_commit":"9ca13b836f647e67d6e9ca8bb712403ffaadd607"},"previous_names":["unslothai/unsloth"],"tags_count":20,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/unslothai%2Funsloth","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/unslothai%2Funsloth/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/unslothai%2Funsloth/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/unslothai%2Funsloth/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/unslothai","download_url":"https://codeload.github.com/unslothai/unsloth/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":252542116,"owners_count":21764910,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deepseek","deepseek-r1","fine-tuning","finetuning","gemma","gemma3","llama","llama-4","llama3","llama4","llm","llms","lora","mistral","qlora","qwen","qwen3","text-to-speech","tts","unsloth"],"created_at":"2024-07-31T16:00:54.856Z","updated_at":"2026-04-01T20:24:04.988Z","avatar_url":"https://github.com/unslothai.png","language":"Python","readme":"\u003ch1 align=\"center\" style=\"margin:0;\"\u003e\n  \u003ca href=\"https://unsloth.ai/docs\"\u003e\u003cpicture\u003e\n    \u003csource media=\"(prefers-color-scheme: dark)\" srcset=\"https://raw.githubusercontent.com/unslothai/unsloth/main/images/STUDIO%20WHITE%20LOGO.png\"\u003e\n    \u003csource 
media=\"(prefers-color-scheme: light)\" srcset=\"https://raw.githubusercontent.com/unslothai/unsloth/main/images/STUDIO%20BLACK%20LOGO.png\"\u003e\n    \u003cimg alt=\"Unsloth logo\" src=\"https://raw.githubusercontent.com/unslothai/unsloth/main/images/STUDIO%20BLACK%20LOGO.png\" height=\"60\" style=\"max-width:100%;\"\u003e\n  \u003c/picture\u003e\u003c/a\u003e\n\u003c/h1\u003e\n\u003ch3 align=\"center\" style=\"margin: 0; margin-top: 0;\"\u003e\nRun and train AI models with a unified local interface.\n\u003c/h3\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"#-features\"\u003eFeatures\u003c/a\u003e •\n  \u003ca href=\"#-quickstart\"\u003eQuickstart\u003c/a\u003e •\n  \u003ca href=\"#-free-notebooks\"\u003eNotebooks\u003c/a\u003e •\n  \u003ca href=\"https://unsloth.ai/docs\"\u003eDocumentation\u003c/a\u003e •\n  \u003ca href=\"https://www.reddit.com/r/unsloth/\"\u003eReddit\u003c/a\u003e\n\u003c/p\u003e\n \u003ca href=\"https://unsloth.ai/docs/new/studio\"\u003e\n\u003cimg alt=\"unsloth studio ui homepage\" src=\"https://raw.githubusercontent.com/unslothai/unsloth/main/studio/frontend/public/studio%20github%20landscape%20colab%20display.png\" style=\"max-width: 100%; margin-bottom: 0;\"\u003e\u003c/a\u003e\n\nUnsloth Studio (Beta) lets you run and train text, [audio](https://unsloth.ai/docs/basics/text-to-speech-tts-fine-tuning), [embedding](https://unsloth.ai/docs/new/embedding-finetuning), [vision](https://unsloth.ai/docs/basics/vision-fine-tuning) models on Windows, Linux and macOS.\n\n## ⭐ Features\nUnsloth provides several key features for both inference and training:\n### Inference\n* **Search + download + run models** including GGUF, LoRA adapters, safetensors\n* **Export models**: [Save or export](https://unsloth.ai/docs/new/studio/export) models to GGUF, 16-bit safetensors and other formats.\n* **Tool calling**: Support for [self-healing tool calling](https://unsloth.ai/docs/new/studio/chat#auto-healing-tool-calling) and web search\n* **[Code 
execution](https://unsloth.ai/docs/new/studio/chat#code-execution)**: lets LLMs test code in Claude artifacts and sandbox environments\n* [Auto-tune inference parameters](https://unsloth.ai/docs/new/studio/chat#auto-parameter-tuning) and customize chat templates.\n* We work directly with teams behind [gpt-oss](https://docs.unsloth.ai/new/gpt-oss-how-to-run-and-fine-tune#unsloth-fixes-for-gpt-oss), [Qwen3](https://www.reddit.com/r/LocalLLaMA/comments/1kaodxu/qwen3_unsloth_dynamic_ggufs_128k_context_bug_fixes/), [Llama 4](https://github.com/ggml-org/llama.cpp/pull/12889), [Mistral](models/tutorials/devstral-how-to-run-and-fine-tune.md), [Gemma 1-3](https://news.ycombinator.com/item?id=39671146), and [Phi-4](https://unsloth.ai/blog/phi4), where we’ve fixed bugs that improve model accuracy.\n* Upload images, audio, PDFs, code, DOCX and more file types to chat with.\n### Training\n* Train and RL **500+ models** up to **2x faster** with up to **70% less VRAM**, with no accuracy loss.\n* Custom Triton and mathematical **kernels**. See some collabs we did with [PyTorch](https://unsloth.ai/docs/get-started/reinforcement-learning-rl-guide/fp8-reinforcement-learning) and [Hugging Face](https://unsloth.ai/docs/new/faster-moe).\n* **Data Recipes**: [Auto-create datasets](https://unsloth.ai/docs/new/studio/data-recipe) from **PDF, CSV, DOCX** etc. 
Edit data in a visual-node workflow.\n* **[Reinforcement Learning](https://unsloth.ai/docs/get-started/reinforcement-learning-rl-guide)** (RL): The most efficient [RL](https://unsloth.ai/docs/get-started/reinforcement-learning-rl-guide) library, using **80% less VRAM** for GRPO, [FP8](https://unsloth.ai/docs/get-started/reinforcement-learning-rl-guide/fp8-reinforcement-learning) etc.\n* Supports full fine-tuning, RL, pretraining, 4-bit, 16-bit, and FP8 training.\n* **Observability**: Monitor training live, track loss and GPU usage, and customize graphs.\n* [Multi-GPU](https://unsloth.ai/docs/basics/multi-gpu-training-with-unsloth) training is supported, with major improvements coming soon.\n\n## ⚡ Quickstart\nUnsloth can be used in two ways: through **[Unsloth Studio](https://unsloth.ai/docs/new/studio/)**, the web UI, or through **Unsloth Core**, the code-based version. Each has different requirements.\n\n### Unsloth Studio (web UI)\nUnsloth Studio (Beta) works on **Windows, Linux, WSL** and **macOS**.\n\n* **CPU:** Currently supported for Chat and Data Recipes\n* **NVIDIA:** Training works on RTX 30/40/50, Blackwell, DGX Spark, Station and more\n* **macOS:** Currently supports Chat and Data Recipes. **MLX training** is coming very soon\n* **AMD:** Chat + Data works. Train with [Unsloth Core](#unsloth-core-code-based). Studio support is coming soon.\n* **Coming soon:** Training support for Apple MLX, AMD, and Intel\n* **Multi-GPU:** Available now, with a major upgrade on the way\n\n#### macOS, Linux, WSL:\n```bash\ncurl -fsSL https://unsloth.ai/install.sh | sh\n```\n#### Windows:\n```powershell\nirm https://unsloth.ai/install.ps1 | iex\n```\n\n#### Launch\n```bash\nunsloth studio -H 0.0.0.0 -p 8888\n```\n\n#### Update\nTo update, use the same install commands as above, or run (does not work on Windows):\n```bash\nunsloth studio update\n```\n\n#### Docker\nUse our [Docker image](https://hub.docker.com/r/unsloth/unsloth) `unsloth/unsloth` container. 
Run:\n```bash\ndocker run -d -e JUPYTER_PASSWORD=\"mypassword\" \\\n  -p 8888:8888 -p 8000:8000 -p 2222:22 \\\n  -v $(pwd)/work:/workspace/work \\\n  --gpus all \\\n  unsloth/unsloth\n  ```\n\n#### Developer, Nightly, Uninstall\nTo see developer, nightly and uninstallation etc. instructions, see [advanced installation](#-advanced-installation).\n\n### Unsloth Core (code-based)\n#### Linux, WSL:\n```bash\ncurl -LsSf https://astral.sh/uv/install.sh | sh\nuv venv unsloth_env --python 3.13\nsource unsloth_env/bin/activate\nuv pip install unsloth --torch-backend=auto\n```\n#### Windows:\n```powershell\nwinget install -e --id Python.Python.3.13\nwinget install --id=astral-sh.uv  -e\nuv venv unsloth_env --python 3.13\n.\\unsloth_env\\Scripts\\activate\nuv pip install unsloth --torch-backend=auto\n```\nFor Windows, `pip install unsloth` works only if you have PyTorch installed. Read our [Windows Guide](https://unsloth.ai/docs/get-started/install/windows-installation).\nYou can use the same Docker image as Unsloth Studio.\n\n#### AMD, Intel:\nFor RTX 50x, B200, 6000 GPUs: `uv pip install unsloth --torch-backend=auto`. Read our guides for: [Blackwell](https://unsloth.ai/docs/blog/fine-tuning-llms-with-blackwell-rtx-50-series-and-unsloth) and [DGX Spark](https://unsloth.ai/docs/blog/fine-tuning-llms-with-nvidia-dgx-spark-and-unsloth). \u003cbr\u003e\nTo install Unsloth on **AMD** and **Intel** GPUs, follow our [AMD Guide](https://unsloth.ai/docs/get-started/install/amd) and [Intel Guide](https://unsloth.ai/docs/get-started/install/intel).\n\n## ✨ Free Notebooks\n\nTrain for free with our notebooks. Read our [guide](https://unsloth.ai/docs/get-started/fine-tuning-llms-guide). 
Add dataset, run, then deploy your trained model.\n\n| Model | Free Notebooks | Performance | Memory use |\n|-----------|---------|--------|----------|\n| **Qwen3.5 (4B)**      | [▶️ Start for free](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_5_(4B)_Vision.ipynb)               | 1.5x faster | 60% less |\n| **gpt-oss (20B)**      | [▶️ Start for free](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/gpt-oss-(20B)-Fine-tuning.ipynb)               | 2x faster | 70% less |\n| **Qwen3.5 GSPO**      | [▶️ Start for free](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_5_(4B)_Vision_GRPO.ipynb)               | 2x faster | 70% less |\n| **gpt-oss (20B): GRPO**      | [▶️ Start for free](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/gpt-oss-(20B)-GRPO.ipynb)               | 2x faster | 80% less |\n| **Qwen3: Advanced GRPO**      | [▶️ Start for free](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(4B)-GRPO.ipynb)               | 2x faster | 70% less |\n| **Gemma 3 (4B) Vision** | [▶️ Start for free](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_(4B)-Vision.ipynb)               | 1.7x faster | 60% less |\n| **embeddinggemma (300M)**    | [▶️ Start for free](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/EmbeddingGemma_(300M).ipynb)               | 2x faster | 20% less |\n| **Mistral Ministral 3 (3B)**      | [▶️ Start for free](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Ministral_3_VL_(3B)_Vision.ipynb)               | 1.5x faster | 60% less |\n| **Llama 3.1 (8B) Alpaca**      | [▶️ Start for free](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb)               | 2x faster | 70% less |\n| **Llama 3.2 Conversational**      | [▶️ Start for 
free](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb)               | 2x faster | 70% less |\n| **Orpheus-TTS (3B)**     | [▶️ Start for free](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Orpheus_(3B)-TTS.ipynb)               | 1.5x faster | 50% less |\n\n- See all our notebooks for: [Kaggle](https://github.com/unslothai/notebooks?tab=readme-ov-file#-kaggle-notebooks), [GRPO](https://unsloth.ai/docs/get-started/unsloth-notebooks#grpo-reasoning-rl-notebooks), [TTS](https://unsloth.ai/docs/get-started/unsloth-notebooks#text-to-speech-tts-notebooks), [embedding](https://unsloth.ai/docs/new/embedding-finetuning) \u0026 [Vision](https://unsloth.ai/docs/get-started/unsloth-notebooks#vision-multimodal-notebooks)\n- See [all our models](https://unsloth.ai/docs/get-started/unsloth-model-catalog) and [all our notebooks](https://unsloth.ai/docs/get-started/unsloth-notebooks)\n- See detailed documentation for Unsloth [here](https://unsloth.ai/docs)\n\n## 🦥 Unsloth News\n- **Introducing Unsloth Studio**: our new web UI for running and training LLMs. [Blog](https://unsloth.ai/docs/new/studio)\n- **Qwen3.5** - 0.8B, 2B, 4B, 9B, 27B, 35-A3B, 112B-A10B are now supported. [Guide + notebooks](https://unsloth.ai/docs/models/qwen3.5/fine-tune)\n- Train **MoE LLMs 12x faster** with 35% less VRAM - DeepSeek, GLM, Qwen and gpt-oss. [Blog](https://unsloth.ai/docs/new/faster-moe)\n- **Embedding models**: Unsloth now supports ~1.8-3.3x faster embedding fine-tuning. [Blog](https://unsloth.ai/docs/new/embedding-finetuning) • [Notebooks](https://unsloth.ai/docs/get-started/unsloth-notebooks#embedding-models)\n- New **7x longer context RL** vs. all other setups, via our new batching algorithms. [Blog](https://unsloth.ai/docs/new/grpo-long-context)\n- New RoPE \u0026 MLP **Triton Kernels** \u0026 **Padding Free + Packing**: 3x faster training \u0026 30% less VRAM. 
[Blog](https://unsloth.ai/docs/new/3x-faster-training-packing)\n- **500K Context**: Training a 20B model with \u003e500K context is now possible on an 80GB GPU. [Blog](https://unsloth.ai/docs/blog/500k-context-length-fine-tuning)\n- **FP8 \u0026 Vision RL**: You can now do FP8 \u0026 VLM GRPO on consumer GPUs. [FP8 Blog](https://unsloth.ai/docs/get-started/reinforcement-learning-rl-guide/fp8-reinforcement-learning) • [Vision RL](https://unsloth.ai/docs/get-started/reinforcement-learning-rl-guide/vision-reinforcement-learning-vlm-rl)\n- **gpt-oss** by OpenAI: Read our [RL blog](https://unsloth.ai/docs/models/gpt-oss-how-to-run-and-fine-tune/gpt-oss-reinforcement-learning), [Flex Attention](https://unsloth.ai/docs/models/gpt-oss-how-to-run-and-fine-tune/long-context-gpt-oss-training) blog, and [Guide](https://unsloth.ai/docs/models/gpt-oss-how-to-run-and-fine-tune).\n\n## 📥 Advanced Installation\nThe advanced instructions below are for Unsloth Studio. For Unsloth Core advanced installation, [view our docs](https://unsloth.ai/docs/get-started/install/pip-install#advanced-pip-installation).\n#### Developer installs: macOS, Linux, WSL:\n```bash\ngit clone https://github.com/unslothai/unsloth\ncd unsloth\n./install.sh --local\nunsloth studio -H 0.0.0.0 -p 8888\n```\nThen to update:\n```bash\nunsloth studio update\n```\n\n#### Developer installs: Windows PowerShell:\n```powershell\ngit clone https://github.com/unslothai/unsloth.git\ncd unsloth\nSet-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass\n.\\install.ps1 --local\nunsloth studio -H 0.0.0.0 -p 8888\n```\nThen to update:\n```bash\nunsloth studio update\n```\n\n#### Nightly: macOS, Linux, WSL:\n```bash\ngit clone https://github.com/unslothai/unsloth\ncd unsloth\ngit checkout nightly\n./install.sh --local\nunsloth studio -H 0.0.0.0 -p 8888\n```\nThen to launch every time:\n```bash\nunsloth studio -H 0.0.0.0 -p 8888\n```\n\n#### Nightly: Windows:\nRun in Windows PowerShell:\n```powershell\ngit clone 
https://github.com/unslothai/unsloth.git\ncd unsloth\ngit checkout nightly\nSet-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass\n.\\install.ps1 --local\nunsloth studio -H 0.0.0.0 -p 8888\n```\nThen to launch every time:\n```bash\nunsloth studio -H 0.0.0.0 -p 8888\n```\n\n#### Uninstall\nYou can uninstall Unsloth Studio by deleting its install folder, usually located under `$HOME/.unsloth/studio` on macOS/Linux/WSL and `%USERPROFILE%\\.unsloth\\studio` on Windows. The commands below will **delete everything**, including your history and cache:\n\n* **macOS, WSL, Linux:** `rm -rf ~/.unsloth/studio`\n* **Windows (PowerShell):** `Remove-Item -Recurse -Force \"$HOME\\.unsloth\\studio\"`\n\nFor more info, [see our docs](https://unsloth.ai/docs/new/studio/install#uninstall).\n\n#### Deleting model files\n\nYou can delete old model files either from the bin icon in model search or by removing the relevant cached model folder from the default Hugging Face cache directory. By default, Hugging Face uses:\n\n* **macOS, Linux, WSL:** `~/.cache/huggingface/hub/`\n* **Windows:** `%USERPROFILE%\\.cache\\huggingface\\hub\\`\n\n## 💚 Community and Links\n| Type | Links |\n| --- | --- |\n| \u003cimg width=\"16\" src=\"https://cdn.prod.website-files.com/6257adef93867e50d84d30e2/66e3d80db9971f10a9757c99_Symbol.svg\" /\u003e **Discord** | [Join Discord server](https://discord.com/invite/unsloth) |\n| \u003cimg width=\"15\" src=\"https://redditinc.com/hs-fs/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" /\u003e **r/unsloth Reddit** | [Join Reddit community](https://reddit.com/r/unsloth) |\n| 📚 **Documentation \u0026 Wiki** | [Read Our Docs](https://unsloth.ai/docs) |\n| \u003cimg width=\"13\" src=\"https://upload.wikimedia.org/wikipedia/commons/0/09/X_(formerly_Twitter)_logo_late_2025.svg\" /\u003e **Twitter (aka X)** | [Follow us on X](https://twitter.com/unslothai) |\n| 🔮 **Our Models** | [Unsloth Catalog](https://unsloth.ai/docs/get-started/unsloth-model-catalog) |\n| ✍️ **Blog** | [Read our Blogs](https://unsloth.ai/blog) |\n\n### Citation\n\nYou can cite the Unsloth repo as follows:\n```bibtex\n@software{unsloth,\n  author = {Daniel Han and Michael Han and {Unsloth team}},\n  title = {Unsloth},\n  url = {https://github.com/unslothai/unsloth},\n  year = {2023}\n}\n```\nIf you trained a model with 🦥Unsloth, you can use this cool sticker! \u003cimg src=\"https://raw.githubusercontent.com/unslothai/unsloth/main/images/made with unsloth.png\" width=\"200\" align=\"center\" /\u003e\n\n### License\nUnsloth uses a dual-licensing model of Apache 2.0 and AGPL-3.0. 
The core Unsloth package remains licensed under **[Apache 2.0](https://github.com/unslothai/unsloth?tab=Apache-2.0-1-ov-file)**, while certain optional components, such as the Unsloth Studio UI, are licensed under the open-source **[AGPL-3.0](https://github.com/unslothai/unsloth?tab=AGPL-3.0-2-ov-file)** license.\n\nThis structure helps support ongoing Unsloth development while keeping the project open source and enabling the broader ecosystem to continue growing.\n\n### Thank You to\n- The [llama.cpp library](https://github.com/ggml-org/llama.cpp), which lets users run and save models with Unsloth\n- The Hugging Face team and their libraries: [transformers](https://github.com/huggingface/transformers) and [TRL](https://github.com/huggingface/trl)\n- The PyTorch and [Torch AO](https://github.com/unslothai/unsloth/pull/3391) teams for their contributions\n- NVIDIA for their [NeMo DataDesigner](https://github.com/NVIDIA-NeMo/DataDesigner) library and their contributions\n- And of course, every single person who has contributed to or used Unsloth!\n","funding_links":["https://ko-fi.com/unsloth"],"categories":["Python","🧠 **CORE AI/ML MASTERY**","🎛 Fine-tuning Platforms","Summary","A01_文本生成_文本对话","\u003cimg src=\"./assets/satellite.svg\" width=\"16\" height=\"16\" style=\"vertical-align: middle;\"\u003e Satellites","NLP","Fine-Tuning \u0026 Training","LLM","LLM Applications","微调 Fine-Tuning","📋 List of Open-Source Projects","Fine-tuning","Model Training and Orchestration","\u003ca id=\"tools\"\u003e\u003c/a\u003e🛠️ Tools","🔓 Open Source Inference Engines","Inference platforms","语言资源库","Advanced Topics","LLM Training Frameworks","ai","HarmonyOS","Repos","Training","Agent Integration \u0026 Deployment Tools","🛠️ AI 工具与框架","其他相关论文","Fine-tuning \u0026 Quantization (18)","\u003ca name=\"Python\"\u003e\u003c/a\u003ePython","🧠 AI Applications \u0026 Platforms","Training \u0026 Fine-Tuning","LLM Training / Finetuning","AI and Agents","LLMs Framework","Language Models","LLM 
Frameworks \u0026 Libraries"],"sub_categories":["**Transformers \u0026 Large Language Models**","Training Frameworks","大语言对话模型及数据","Fine-Tuning Frameworks","LLM Infra and Optimization","3. Pretraining","Bleeding Edge ⚗️","python","Fine-Tuning \u0026 Training","Windows Manager","FineTune","AI Developer Toolkit","模型微调","Tools","Fine-Tuning Tools","Fine-tuning"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Funslothai%2Funsloth","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Funslothai%2Funsloth","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Funslothai%2Funsloth/lists"}