Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/google/gemma_pytorch
The official PyTorch implementation of Google's Gemma models
- Host: GitHub
- URL: https://github.com/google/gemma_pytorch
- Owner: google
- License: apache-2.0
- Created: 2024-02-20T17:53:21.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2024-07-31T03:01:58.000Z (5 months ago)
- Last Synced: 2024-12-03T10:04:49.012Z (10 days ago)
- Topics: gemma, google, pytorch
- Language: Python
- Homepage: https://ai.google.dev/gemma
- Size: 2.11 MB
- Stars: 5,309
- Watchers: 39
- Forks: 512
- Open Issues: 5
- Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
Awesome Lists containing this project
- ai-game-devtools - Gemma - state-of-the-art open models built from research and technology used to create Google Gemini models. (Project List / Tool (AI LLM))
- StarryDivineSky - google/gemma_pytorch
- AiTreasureBox - google/gemma_pytorch - The official PyTorch implementation of Google's Gemma models (Repos)
- awesome-llm-and-aigc - Gemma
README
# Gemma in PyTorch
**Gemma** is a family of lightweight, state-of-the-art open models built from research and technology used to create Google Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. For more details, please check out the following links:
* [Gemma on Google AI](https://ai.google.dev/gemma)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex AI Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335)

This is the official PyTorch implementation of Gemma models. We provide model and inference implementations using both PyTorch and PyTorch/XLA, and support running inference on CPU, GPU and TPU.
## Updates
* [June 26th 🔥] Support Gemma v2. You can find the checkpoints [on Kaggle](https://www.kaggle.com/models/google/gemma-2/pytorch) and Hugging Face.
* [April 9th] Support CodeGemma. You can find the checkpoints [on Kaggle](https://www.kaggle.com/models/google/codegemma/pytorch) and [Hugging Face](https://huggingface.co/collections/google/codegemma-release-66152ac7b683e2667abdee11).
* [April 5th] Support Gemma v1.1. You can find the v1.1 checkpoints [on Kaggle](https://www.kaggle.com/models/google/gemma/frameworks/pyTorch) and [Hugging Face](https://huggingface.co/collections/google/gemma-release-65d5efbccdbb8c4202ec078b).
## Download Gemma model checkpoint
You can find the model checkpoints on Kaggle
[here](https://www.kaggle.com/models/google/gemma/frameworks/pyTorch). Alternatively, you can find the model checkpoints on the Hugging Face Hub [here](https://huggingface.co/models?other=gemma_torch). To download the models, go to the model repository of the model of interest, click the `Files and versions` tab, and download the model and tokenizer files. For programmatic downloading, if you have `huggingface_hub` installed, you can also run:

```
huggingface-cli download google/gemma-7b-it-pytorch
```
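Equivalently, here is a minimal Python sketch using `huggingface_hub`'s `snapshot_download`; the repo id and target directory are illustrative:

```python
# a minimal sketch, assuming huggingface_hub is installed;
# repo_id and local_dir are illustrative choices
from huggingface_hub import snapshot_download

ckpt_dir = snapshot_download(
    repo_id="google/gemma-7b-it-pytorch",  # pick the variant you want
    local_dir="./gemma-7b-it",
)
print(f"Checkpoint files downloaded to {ckpt_dir}")
```

Note that access to the Gemma checkpoints may require accepting the model license on the Hub first, in which case you need to authenticate (for example with `huggingface-cli login`).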
Note that you can choose between the 2B, 2B V2, 7B, 7B int8 quantized, 9B, and 27B variants.

```
VARIANT=<2b or 7b or 9b or 27b>
CKPT_PATH=  # set this to the directory containing the downloaded checkpoint
```
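For example, if you downloaded the 7B instruction-tuned checkpoint into `./gemma-7b-it` as above (values here are illustrative, and `VARIANT` must match the checkpoint you downloaded):

```bash
# illustrative values for the 7B instruction-tuned checkpoint
VARIANT=7b
CKPT_PATH=$PWD/gemma-7b-it
```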
## Try it free on Colab

Follow the steps at
[https://ai.google.dev/gemma/docs/pytorch_gemma](https://ai.google.dev/gemma/docs/pytorch_gemma).

## Try it out with PyTorch
Prerequisite: make sure you have set up Docker permissions properly as a non-root user.
```bash
sudo usermod -aG docker $USER
newgrp docker
```
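To verify that the group change took effect, a standard Docker smoke test (not specific to this repo) is:

```bash
# should run without sudo once the group membership is active
docker run --rm hello-world
```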
### Build the docker image.

```bash
DOCKER_URI=gemma:${USER}
docker build -f docker/Dockerfile ./ -t ${DOCKER_URI}
```

### Run Gemma inference on CPU.
```bash
PROMPT="The meaning of life is"

docker run -t --rm \
-v ${CKPT_PATH}:/tmp/ckpt \
${DOCKER_URI} \
python scripts/run.py \
--ckpt=/tmp/ckpt \
--variant="${VARIANT}" \
--prompt="${PROMPT}"
# add `--quant` for the int8 quantized model.
```
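For instance, assuming you downloaded the int8-quantized 7B checkpoint and mounted it at `CKPT_PATH` (values illustrative):

```bash
# illustrative: int8-quantized 7B run on CPU; note the extra --quant flag
docker run -t --rm \
    -v ${CKPT_PATH}:/tmp/ckpt \
    ${DOCKER_URI} \
    python scripts/run.py \
    --ckpt=/tmp/ckpt \
    --variant="7b" \
    --prompt="${PROMPT}" \
    --quant
```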
### Run Gemma inference on GPU.

```bash
PROMPT="The meaning of life is"

docker run -t --rm \
--gpus all \
-v ${CKPT_PATH}:/tmp/ckpt \
${DOCKER_URI} \
python scripts/run.py \
--device=cuda \
--ckpt=/tmp/ckpt \
--variant="${VARIANT}" \
--prompt="${PROMPT}"
# add `--quant` for the int8 quantized model.
```

## Try it out with PyTorch/XLA
### Build the docker image (CPU, TPU).
```bash
DOCKER_URI=gemma_xla:${USER}
docker build -f docker/xla.Dockerfile ./ -t ${DOCKER_URI}
```

### Build the docker image (GPU).
```bash
DOCKER_URI=gemma_xla_gpu:${USER}
docker build -f docker/xla_gpu.Dockerfile ./ -t ${DOCKER_URI}
```

### Run Gemma inference on CPU.
```bash
docker run -t --rm \
--shm-size 4gb \
-e PJRT_DEVICE=CPU \
-v ${CKPT_PATH}:/tmp/ckpt \
${DOCKER_URI} \
python scripts/run_xla.py \
--ckpt=/tmp/ckpt \
--variant="${VARIANT}"
# add `--quant` for the int8 quantized model.
```

### Run Gemma inference on TPU.
Note: be sure to use the docker container built from `xla.Dockerfile`.
```bash
docker run -t --rm \
--shm-size 4gb \
-e PJRT_DEVICE=TPU \
-v ${CKPT_PATH}:/tmp/ckpt \
${DOCKER_URI} \
python scripts/run_xla.py \
--ckpt=/tmp/ckpt \
--variant="${VARIANT}"
# add `--quant` for the int8 quantized model.
```

### Run Gemma inference on GPU.
Note: be sure to use the docker container built from `xla_gpu.Dockerfile`.
```bash
docker run -t --rm --privileged \
--shm-size=16g --net=host --gpus all \
-e USE_CUDA=1 \
-e PJRT_DEVICE=CUDA \
-v ${CKPT_PATH}:/tmp/ckpt \
${DOCKER_URI} \
python scripts/run_xla.py \
--ckpt=/tmp/ckpt \
--variant="${VARIANT}"
# add `--quant` for the int8 quantized model.
```

### Tokenizer Notes
99 unused tokens are reserved in the pretrained tokenizer model to assist with more efficient training/fine-tuning. Unused tokens are in the string format of `<unused[0-98]>` with token id range of `[7-105]`.

```
"<unused0>": 7,
"<unused1>": 8,
"<unused2>": 9,
...
"<unused98>": 105,
```
## Disclaimer
This is not an officially supported Google product.