Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
replicate/cog: Containers for machine learning
- Host: GitHub
- URL: https://github.com/replicate/cog
- Owner: replicate
- License: apache-2.0
- Created: 2021-02-26T23:43:09.000Z (almost 4 years ago)
- Default Branch: main
- Last Pushed: 2024-12-23T18:44:39.000Z (19 days ago)
- Last Synced: 2024-12-30T10:17:09.415Z (12 days ago)
- Topics: ai, containers, cuda, deep-learning, docker, machine-learning, pytorch, tensorflow
- Language: Python
- Homepage: https://cog.run
- Size: 7.82 MB
- Stars: 8,232
- Watchers: 69
- Forks: 571
- Open Issues: 407
- Metadata Files:
  - Readme: README.md
  - Contributing: CONTRIBUTING.md
  - License: LICENSE
Awesome Lists containing this project
- awesome-genai - COG - Containers for ML (Tools for Deployment)
- awesome-replicate - Cog - Containers for machine learning. (Open-source tools)
- awesome-mlops - Cog - Open-source tool that lets you package ML models in a standard, production-ready container. (Model Serving)
- awesome-ccamel - replicate/cog - Containers for machine learning (Python)
- StarryDivineSky - replicate/cog
README
# Cog: Containers for machine learning
Cog is an open-source tool that lets you package machine learning models in a standard, production-ready container.
You can deploy your packaged model to your own infrastructure, or to [Replicate](https://replicate.com/).
## Highlights
- 📦 **Docker containers without the pain.** Writing your own `Dockerfile` can be a bewildering process. With Cog, you define your environment with a [simple configuration file](#how-it-works) and it generates a Docker image with all the best practices: Nvidia base images, efficient caching of dependencies, installing specific Python versions, sensible environment variable defaults, and so on.
- 🤬 **No more CUDA hell.** Cog knows which CUDA/cuDNN/PyTorch/TensorFlow/Python combos are compatible and will set it all up correctly for you.
- ✓ **Define the inputs and outputs for your model with standard Python.** Then, Cog generates an OpenAPI schema and validates the inputs and outputs with Pydantic.
- ⚡ **Automatic HTTP prediction server**: Your model's types are used to dynamically generate a RESTful HTTP API using [FastAPI](https://fastapi.tiangolo.com/) (see the example after this list).
- 🥞 **Automatic queue worker.** Long-running deep learning models or batch processing is best architected with a queue. Cog models do this out of the box. Redis is currently supported, with more in the pipeline.
- ☁️ **Cloud storage.** Files can be read and written directly to Amazon S3 and Google Cloud Storage. (Coming soon.)
- 🚀 **Ready for production.** Deploy your model anywhere that Docker images run. Your own infrastructure, or [Replicate](https://replicate.com).
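For instance, once a model container is running (as shown in the next section), the generated OpenAPI schema can be fetched from the FastAPI server. Assuming the container is listening on the default port 5000:

```console
$ curl http://localhost:5000/openapi.json
```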
## How it works
Define the Docker environment your model runs in with `cog.yaml`:
```yaml
build:
  gpu: true
  system_packages:
    - "libgl1-mesa-glx"
    - "libglib2.0-0"
  python_version: "3.12"
  python_packages:
    - "torch==2.3"
predict: "predict.py:Predictor"
```
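By contrast, a model that needs no GPU or system packages can use a much smaller config. A minimal sketch, with an illustrative `pillow` pin:

```yaml
build:
  python_version: "3.12"
  python_packages:
    - "pillow==10.3.0"
predict: "predict.py:Predictor"
```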
Define how predictions are run on your model with `predict.py`:

```python
from cog import BasePredictor, Input, Path
import torch

class Predictor(BasePredictor):
    def setup(self):
        """Load the model into memory to make running multiple predictions efficient"""
        self.model = torch.load("./weights.pth")

    # The arguments and types the model takes as input
    def predict(self,
        image: Path = Input(description="Grayscale input image")
    ) -> Path:
        """Run a single prediction on the model"""
        processed_image = preprocess(image)
        output = self.model(processed_image)
        return postprocess(output)
```
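`Input` fields can carry more than a description. A brief sketch of Cog's validation options; the `scale` parameter here is hypothetical:

```python
from cog import BasePredictor, Input, Path

class Predictor(BasePredictor):
    def predict(
        self,
        image: Path = Input(description="Grayscale input image"),
        # ge/le add numeric bounds; a default makes the input optional
        scale: float = Input(description="Upscaling factor", ge=1.0, le=4.0, default=2.0),
    ) -> Path:
        ...
```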
Now, you can run predictions on this model:

```console
$ cog predict -i [email protected]
--> Building Docker image...
--> Running Prediction...
--> Output written to output.jpg
```

Or, build a Docker image for deployment:
```console
$ cog build -t my-colorization-model
--> Building Docker image...
--> Built my-colorization-model:latest

$ docker run -d -p 5000:5000 --gpus all my-colorization-model
$ curl http://localhost:5000/predictions -X POST \
    -H 'Content-Type: application/json' \
    -d '{"input": {"image": "https://.../input.jpg"}}'
```
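The same request can be made from Python. A minimal sketch using `requests`, assuming the container above is listening on port 5000 and the input URL (hypothetical here) is reachable:

```python
import requests

# Assumes a Cog model container is running locally on port 5000.
resp = requests.post(
    "http://localhost:5000/predictions",
    json={"input": {"image": "https://example.com/input.jpg"}},  # hypothetical URL
)
resp.raise_for_status()
prediction = resp.json()
print(prediction["status"])  # e.g. "succeeded"
print(prediction["output"])  # shaped per the model's OpenAPI schema
```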
Or, combine build and run via the `serve` command:

```console
$ cog serve -p 8080

$ curl http://localhost:8080/predictions -X POST \
    -H 'Content-Type: application/json' \
    -d '{"input": {"image": "https://.../input.jpg"}}'
```

## Why are we building this?
It's really hard for researchers to ship machine learning models to production.
Part of the solution is Docker, but it is so complex to get it to work: Dockerfiles, pre-/post-processing, Flask servers, CUDA versions. More often than not the researcher has to sit down with an engineer to get the damn thing deployed.
[Andreas](https://github.com/andreasjansson) and [Ben](https://github.com/bfirsh) created Cog. Andreas used to work at Spotify, where he built tools for building and deploying ML models with Docker. Ben worked at Docker, where he created [Docker Compose](https://github.com/docker/compose).
We realized that, in addition to Spotify, other companies were also using Docker to build and deploy machine learning models. [Uber](https://eng.uber.com/michelangelo-pyml/) and others have built similar systems. So, we're making an open source version so other people can do this too.
Hit us up if you're interested in using it or want to collaborate with us. [We're on Discord](https://discord.gg/replicate) or email us at [[email protected]](mailto:[email protected]).
## Prerequisites
- **macOS, Linux or Windows 11**. Cog works on macOS, Linux, and Windows 11 with [WSL 2](docs/wsl2/wsl2.md).
- **Docker**. Cog uses Docker to create a container for your model. You'll need to [install Docker](https://docs.docker.com/get-docker/) before you can run Cog. If you install Docker Engine instead of Docker Desktop, you will need to [install Buildx](https://docs.docker.com/build/architecture/#buildx) as well.
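To confirm both are available before building, the standard Docker CLI version checks are:

```console
$ docker --version
$ docker buildx version
```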
## Install

If you're using macOS, you can install Cog using Homebrew:
```console
brew install cog
```

You can also download and install the latest release using our
[install script](https://cog.run/install):

```sh
# fish shell
sh (curl -fsSL https://cog.run/install.sh | psub)

# bash, zsh, and other shells
sh <(curl -fsSL https://cog.run/install.sh)

# download with wget and run in a separate command
wget -qO install.sh https://cog.run/install.sh
sh ./install.sh
```

You can manually install the latest release of Cog directly from GitHub
by running the following commands in a terminal:

```console
sudo curl -o /usr/local/bin/cog -L "https://github.com/replicate/cog/releases/latest/download/cog_$(uname -s)_$(uname -m)"
sudo chmod +x /usr/local/bin/cog
```
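After installing, you can confirm the binary is on your `PATH` with a version check (exact output varies by release):

```console
$ cog --version
```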
Alternatively, you can build Cog from source and install it with these commands:

```console
make
sudo make install
```

Or, if you're installing inside a Docker image:

```dockerfile
RUN sh -c "INSTALL_DIR=\"/usr/local/bin\" SUDO=\"\" $(curl -fsSL https://cog.run/install.sh)"
```

## Upgrade
If you're using macOS and you previously installed Cog with Homebrew, run the following:
```console
brew upgrade cog
```

Otherwise, you can upgrade to the latest version by running the same commands you used to install it.
## Next steps
- [Get started with an example model](docs/getting-started.md)
- [Get started with your own model](docs/getting-started-own-model.md)
- [Using Cog with notebooks](docs/notebooks.md)
- [Using Cog with Windows 11](docs/wsl2/wsl2.md)
- [Take a look at some examples of using Cog](https://github.com/replicate/cog-examples)
- [Deploy models with Cog](docs/deploy.md)
- [`cog.yaml` reference](docs/yaml.md) to learn how to define your model's environment
- [Prediction interface reference](docs/python.md) to learn how the `Predictor` interface works
- [Training interface reference](docs/training.md) to learn how to add a fine-tuning API to your model
- [HTTP API reference](docs/http.md) to learn how to use the HTTP API that models serve

## Need help?
[Join us in #cog on Discord.](https://discord.gg/replicate)
## Contributors ✨
Thanks goes to these wonderful people:

Ben Firshman, Andreas Jansson, Zeke Sikelianos, Rory Byrne, Michael Floering, Ben Evans, shashank agarwal, VictorXLR, hung anna, Brian Whitman, JimothyJohn, ericguizzo, Dominic Baggott, Dashiell Stander, Shuwei Liang, Eric Allam, Iván Perdomo, Charles Frye, Luan Pham, TommyDew, Jesse Andrews, Nick Stenning, Justin Merrell, Rurik Ylä-Onnenvuori, Youka, Clay Mullis, Mattt, Eng Zer Jun, BB, williamluer, Simon Eskildsen, F, Philip Potter, Joanne Chen, technillogue, Aron Carroll, Bohdan Mykhailenko, Daniel Radu, Itay Etelis, Gennaro Schiano, André Knörig
This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!