
🐣 Please follow me for new updates https://twitter.com/camenduru

🔥 Please join our discord server https://discord.gg/k5BwmmvJJU

🥳 Please join my patreon community https://patreon.com/camenduru

## 🚦 WIP 🚦

## 🦒 Colab
| colab | Info - Model Page |
| --- | --- |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/vicuna-13b-GPTQ-4bit-128g.ipynb) | vicuna-13b-GPTQ-4bit-128g <br> https://vicuna.lmsys.org |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/vicuna-13B-1.1-GPTQ-4bit-128g.ipynb) | vicuna-13B-1.1-GPTQ-4bit-128g <br> https://vicuna.lmsys.org |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/stable-vicuna-13B-GPTQ-4bit-128g.ipynb) | stable-vicuna-13B-GPTQ-4bit-128g <br> https://huggingface.co/CarperAI/stable-vicuna-13b-delta |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/gpt4-x-alpaca-13b-native-4bit-128g.ipynb) | gpt4-x-alpaca-13b-native-4bit-128g <br> https://huggingface.co/chavinlo/gpt4-x-alpaca |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/pyg-7b-GPTQ-4bit-128g.ipynb) | pyg-7b-GPTQ-4bit-128g <br> https://huggingface.co/Neko-Institute-of-Science/pygmalion-7b |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/koala-13B-GPTQ-4bit-128g.ipynb) | koala-13B-GPTQ-4bit-128g <br> https://bair.berkeley.edu/blog/2023/04/03/koala |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/oasst-llama13b-GPTQ-4bit-128g.ipynb) | oasst-llama13b-GPTQ-4bit-128g <br> https://open-assistant.io |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/wizard-lm-uncensored-7b-GPTQ-4bit-128g.ipynb) | wizard-lm-uncensored-7b-GPTQ-4bit-128g <br> https://github.com/nlpxucan/WizardLM |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/mpt-storywriter-7b-GPTQ-4bit-128g.ipynb) | mpt-storywriter-7b-GPTQ-4bit-128g <br> https://www.mosaicml.com |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/wizard-lm-uncensored-13b-GPTQ-4bit-128g.ipynb) | wizard-lm-uncensored-13b-GPTQ-4bit-128g <br> https://github.com/nlpxucan/WizardLM |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/pyg-13b-GPTQ-4bit-128g.ipynb) | pyg-13b-GPTQ-4bit-128g <br> https://huggingface.co/PygmalionAI/pygmalion-13b |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/falcon-7b-instruct-GPTQ-4bit.ipynb) | falcon-7b-instruct-GPTQ-4bit <br> https://falconllm.tii.ae/ |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/wizard-lm-13b-1.1-GPTQ-4bit-128g.ipynb) | wizard-lm-13b-1.1-GPTQ-4bit-128g <br> https://github.com/nlpxucan/WizardLM |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/llama-2-7b-chat-GPTQ-4bit.ipynb) | llama-2-7b-chat-GPTQ-4bit (4bit) <br> https://ai.meta.com/llama/ |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/llama-2-13b-chat-GPTQ-4bit.ipynb) | llama-2-13b-chat-GPTQ-4bit (4bit) <br> https://ai.meta.com/llama/ |
🚦 WIP 🚦 please try llama-2-13b-chat or llama-2-7b-chat or llama-2-7b-chat-GPTQ-4bit
| colab | Info - Model Page |
| --- | --- |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/llama-2-7b-chat.ipynb) | llama-2-7b-chat (16bit) <br> https://ai.meta.com/llama/ |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/llama-2-13b-chat.ipynb) | llama-2-13b-chat (8bit) <br> https://ai.meta.com/llama/ |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/redmond-puffin-13b-GPTQ-4bit.ipynb) | redmond-puffin-13b-GPTQ-4bit (4bit) <br> https://huggingface.co/NousResearch/Redmond-Puffin-13B |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/stable-beluga-7b.ipynb) | stable-beluga-7b (16bit) <br> https://huggingface.co/stabilityai/StableBeluga-7B |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/doctor-gpt-7b.ipynb) | doctor-gpt-7b (16bit) <br> https://ai.meta.com/llama/ (https://github.com/llSourcell/DoctorGPT) |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/code-llama-7b.ipynb) | code-llama-7b (16bit) <br> https://github.com/facebookresearch/codellama |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/code-llama-instruct-7b.ipynb) | code-llama-instruct-7b (16bit) <br> https://github.com/facebookresearch/codellama |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/code-llama-python-7b.ipynb) | code-llama-python-7b (16bit) <br> https://github.com/facebookresearch/codellama |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/mistral-7b-Instruct-v0.1-8bit.ipynb) | mistral-7b-Instruct-v0.1-8bit (8bit) <br> https://mistral.ai/ |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/text-generation-webui-colab/blob/main/mytho-max-l2-13b-GPTQ.ipynb) | mytho-max-l2-13b-GPTQ (4bit) <br> https://huggingface.co/Gryphe/MythoMax-L2-13b |
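
The llama-2 chat notebooks above serve Llama-2 chat models, which expect Meta's published `[INST]`/`<<SYS>>` prompt format. A minimal sketch of building such a prompt (the function name is illustrative, not part of this repo or the web UI):

```python
# Sketch of the Llama-2 chat prompt format. The tag strings follow
# Meta's published format; build_llama2_prompt is an illustrative helper.
def build_llama2_prompt(system: str, user: str) -> str:
    """Wrap a system message and a single user turn in Llama-2 chat tags."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = build_llama2_prompt(
    system="You are a helpful assistant.",
    user="Summarize what a GPTQ 4-bit model is.",
)
print(prompt)
```

If the prompt is malformed (for example, a missing `[/INST]`), chat models tend to ramble or echo the tags, which is why the web UI handles this formatting for you via templates.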

## 🦒 Colab Pro
According to the Facebook Research LLaMA license (a non-commercial bespoke license), we may not be able to use this model with a Colab Pro account.
But Yann LeCun said "GPL v3" (https://twitter.com/ylecun/status/1629189925089296386), so I am a little confused. Is it possible to use this with a paid Colab Pro account?

## Tutorial
https://www.youtube.com/watch?v=kgA7eKU1XuA

#### ⚠ If you encounter an `IndexError: list index out of range` error, please set the model's instruction template.
![Screenshot 2023-08-28 165206](https://github.com/camenduru/text-generation-webui-colab/assets/54370274/7f619737-eb3e-4368-9b03-65836d1207f0)
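
An instruction template tells the web UI how to wrap each chat turn before sending text to the model; without one, turn parsing can fail with the error above. The sketch below mimics a Vicuna-1.1-style template purely for illustration (the function and its signature are hypothetical, not the web UI's actual API):

```python
# Illustration of what an instruction template does: it defines how the
# conversation history and the new user message are serialized into one
# prompt string. This mimics the Vicuna-1.1 style; names are illustrative.
def apply_vicuna_template(history: list, user_msg: str) -> str:
    system = ("A chat between a curious user and an artificial "
              "intelligence assistant.")
    parts = [system]
    for user, assistant in history:      # prior (user, assistant) turns
        parts.append(f"USER: {user}")
        parts.append(f"ASSISTANT: {assistant}</s>")
    parts.append(f"USER: {user_msg}")    # the new turn
    parts.append("ASSISTANT:")           # the model completes from here
    return "\n".join(parts)

print(apply_vicuna_template([], "Hello!"))
```

In the web UI itself you select the matching template from the dropdown shown in the screenshot above rather than writing code.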

## Text Generation Web UI
[https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) (Thanks to @oobabooga ❤)

## Models License
| Model | License |
| --- | --- |
| vicuna-13b-GPTQ-4bit-128g | From https://vicuna.lmsys.org: The online demo is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Please contact us if you find any potential violation. The code is released under the Apache License 2.0. |
| gpt4-x-alpaca-13b-native-4bit-128g | https://huggingface.co/chavinlo/alpaca-native -> https://huggingface.co/chavinlo/alpaca-13b -> https://huggingface.co/chavinlo/gpt4-x-alpaca |
| llama-2 | https://ai.meta.com/llama/ Llama 2 is available free of charge for research and commercial use. 🥳 |

## Special Thanks
Thanks to facebookresearch ❤ for https://github.com/facebookresearch/llama

Thanks to lmsys ❤ for https://huggingface.co/lmsys/vicuna-13b-delta-v0

Thanks to anon8231489123 ❤ for https://huggingface.co/anon8231489123/vicuna-13b-GPTQ-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/lmsys/vicuna-13b-delta-v0)

Thanks to tatsu-lab ❤ for https://github.com/tatsu-lab/stanford_alpaca

Thanks to chavinlo ❤ for https://huggingface.co/chavinlo/gpt4-x-alpaca

Thanks to qwopqwop200 ❤ for https://github.com/qwopqwop200/GPTQ-for-LLaMa

Thanks to tsumeone ❤ for https://huggingface.co/tsumeone/gpt4-x-alpaca-13b-native-4bit-128g-cuda (GPTQ 4bit quantization of: https://huggingface.co/chavinlo/gpt4-x-alpaca)

Thanks to transformers ❤ for https://github.com/huggingface/transformers

Thanks to gradio-app ❤ for https://github.com/gradio-app/gradio

Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ

Thanks to Neko-Institute-of-Science ❤ for https://huggingface.co/Neko-Institute-of-Science/pygmalion-7b

Thanks to gozfarb ❤ for https://huggingface.co/gozfarb/pygmalion-7b-4bit-128g-cuda (GPTQ 4bit quantization of: https://huggingface.co/Neko-Institute-of-Science/pygmalion-7b)

Thanks to young-geng ❤ for https://huggingface.co/young-geng/koala

Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/young-geng/koala)

Thanks to dvruette ❤ for https://huggingface.co/dvruette/oasst-llama-13b-2-epochs

Thanks to gozfarb ❤ for https://huggingface.co/gozfarb/oasst-llama13b-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/dvruette/oasst-llama-13b-2-epochs)

Thanks to ehartford ❤ for https://huggingface.co/ehartford/WizardLM-7B-Uncensored

Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/ehartford/WizardLM-7B-Uncensored)

Thanks to mosaicml ❤ for https://huggingface.co/mosaicml/mpt-7b-storywriter

Thanks to OccamRazor ❤ for https://huggingface.co/OccamRazor/mpt-7b-storywriter-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/mosaicml/mpt-7b-storywriter)

Thanks to ehartford ❤ for https://huggingface.co/ehartford/WizardLM-13B-Uncensored

Thanks to ausboss ❤ for https://huggingface.co/ausboss/WizardLM-13B-Uncensored-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/ehartford/WizardLM-13B-Uncensored)

Thanks to PygmalionAI ❤ for https://huggingface.co/PygmalionAI/pygmalion-13b

Thanks to notstoic ❤ for https://huggingface.co/notstoic/pygmalion-13b-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/PygmalionAI/pygmalion-13b)

Thanks to WizardLM ❤ for https://huggingface.co/WizardLM/WizardLM-13B-V1.1

Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/WizardLM/WizardLM-13B-V1.1)

Thanks to meta-llama ❤ for https://huggingface.co/meta-llama/Llama-2-7b-chat-hf

Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/Llama-2-7b-Chat-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)

Thanks to meta-llama ❤ for https://huggingface.co/meta-llama/Llama-2-13b-chat-hf

Thanks to localmodels ❤ for https://huggingface.co/localmodels/Llama-2-13B-Chat-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)

Thanks to NousResearch ❤ for https://huggingface.co/NousResearch/Redmond-Puffin-13B

Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/NousResearch/Redmond-Puffin-13B)

Thanks to llSourcell ❤ for https://huggingface.co/llSourcell/medllama2_7b

Thanks to MetaAI ❤ for https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/

Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/CodeLlama-7B-fp16

Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-fp16

Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/CodeLlama-7B-Python-fp16

Thanks to MistralAI ❤ for https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1

Thanks to Gryphe ❤ for https://huggingface.co/Gryphe/MythoMax-L2-13b

Thanks to TheBloke ❤ for https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/Gryphe/MythoMax-L2-13b)

## Medical Advice Disclaimer
DISCLAIMER: THIS WEBSITE DOES NOT PROVIDE MEDICAL ADVICE
The information, including but not limited to, text, graphics, images and other material contained on this website are for informational purposes only. No material on this site is intended to be a substitute for professional medical advice, diagnosis or treatment. Always seek the advice of your physician or other qualified health care provider with any questions you may have regarding a medical condition or treatment and before undertaking a new health care regimen, and never disregard professional medical advice or delay in seeking it because of something you have read on this website.