Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Containers for machine learning
https://github.com/replicate/cog
ai containers cuda deep-learning docker machine-learning pytorch tensorflow
Last synced: 3 months ago
- Host: GitHub
- URL: https://github.com/replicate/cog
- Owner: replicate
- License: apache-2.0
- Created: 2021-02-26T23:43:09.000Z (over 3 years ago)
- Default Branch: main
- Last Pushed: 2024-03-08T17:53:15.000Z (3 months ago)
- Last Synced: 2024-03-08T20:58:16.327Z (3 months ago)
- Topics: ai, containers, cuda, deep-learning, docker, machine-learning, pytorch, tensorflow
- Language: Python
- Homepage: https://cog.run
- Size: 4.14 MB
- Stars: 6,660
- Watchers: 64
- Forks: 449
- Open Issues: 380
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
Lists
- awesome-mlops - Cog - Open-source tool that lets you package ML models in a standard, production-ready container. (Model Serving)
- dive-into-machine-learning - `cog`
- awesome-stars - cog
- awesome-stars - replicate/cog
- awesome-genai - COG - Containers for ML (Tools for Deployment)
- awesome - replicate/cog - Containers for machine learning (Python)
- awesome-stars - replicate/cog - Containers for machine learning (Python)
- awesome-stars - replicate/cog - Containers for machine learning (tensorflow)
- awesome-stars - replicate/cog - `⭐ 7355` Containers for machine learning (Python)
- awesome-replicate - Cog - Containers for machine learning. (Open-source tools)
- awesome-stars - replicate/cog - `⭐ 6412` Containers for machine learning (Python)
README
# Cog: Containers for machine learning
Cog is an open-source tool that lets you package machine learning models in a standard, production-ready container.
You can deploy your packaged model to your own infrastructure, or to [Replicate](https://replicate.com/).
## Highlights
- 📦 **Docker containers without the pain.** Writing your own `Dockerfile` can be a bewildering process. With Cog, you define your environment with a [simple configuration file](#how-it-works) and it generates a Docker image with all the best practices: Nvidia base images, efficient caching of dependencies, installing specific Python versions, sensible environment variable defaults, and so on.
- 🤬️ **No more CUDA hell.** Cog knows which CUDA/cuDNN/PyTorch/TensorFlow/Python combos are compatible and will set it all up correctly for you.
- ✅ **Define the inputs and outputs for your model with standard Python.** Then, Cog generates an OpenAPI schema and validates the inputs and outputs with Pydantic.
- 🚀 **Automatic HTTP prediction server**: Your model's types are used to dynamically generate a RESTful HTTP API using [FastAPI](https://fastapi.tiangolo.com/).
- 🥞 **Automatic queue worker.** Long-running deep learning models or batch processing is best architected with a queue. Cog models do this out of the box. Redis is currently supported, with more in the pipeline.
- ☁️ **Cloud storage.** Files can be read and written directly to Amazon S3 and Google Cloud Storage. (Coming soon.)
- 🚢 **Ready for production.** Deploy your model anywhere that Docker images run. Your own infrastructure, or [Replicate](https://replicate.com).
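To make the "standard Python" point concrete: Cog derives validation from your function's type annotations (using Pydantic under the hood). The sketch below is a hypothetical stdlib-only analogy of that idea, not Cog's actual implementation:

```python
# Hypothetical stdlib-only analogy of annotation-driven input validation:
# read a function's type hints and check each argument against them before
# calling it. Cog does something similar with Pydantic behind the scenes.
import typing


def validate_and_call(fn, **kwargs):
    """Check kwargs against fn's type annotations, then call fn."""
    hints = typing.get_type_hints(fn)
    for name, value in kwargs.items():
        expected = hints.get(name)
        if expected is not None and not isinstance(value, expected):
            raise TypeError(
                f"{name} must be {expected.__name__}, got {type(value).__name__}"
            )
    return fn(**kwargs)


def predict(prompt: str, steps: int) -> str:
    return f"{prompt!r} with {steps} steps"
```

Calling `validate_and_call(predict, prompt="a cat", steps=20)` succeeds, while passing `steps="20"` raises a `TypeError` before the model code ever runs.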
## How it works
Define the Docker environment your model runs in with `cog.yaml`:
```yaml
build:
  gpu: true
  system_packages:
    - "libgl1-mesa-glx"
    - "libglib2.0-0"
  python_version: "3.11"
  python_packages:
    - "torch==1.8.1"
predict: "predict.py:Predictor"
```

Define how predictions are run on your model with `predict.py`:
```python
from cog import BasePredictor, Input, Path
import torch


class Predictor(BasePredictor):
    def setup(self):
        """Load the model into memory to make running multiple predictions efficient"""
        self.model = torch.load("./weights.pth")

    # The arguments and types the model takes as input
    def predict(self,
            image: Path = Input(description="Grayscale input image")
    ) -> Path:
        """Run a single prediction on the model"""
        processed_image = preprocess(image)
        output = self.model(processed_image)
        return postprocess(output)
```

Now, you can run predictions on this model:
```console
$ cog predict -i [email protected]
--> Building Docker image...
--> Running Prediction...
--> Output written to output.jpg
```

Or, build a Docker image for deployment:
```console
$ cog build -t my-colorization-model
--> Building Docker image...
--> Built my-colorization-model:latest

$ docker run -d -p 5000:5000 --gpus all my-colorization-model
$ curl http://localhost:5000/predictions -X POST \
-H 'Content-Type: application/json' \
-d '{"input": {"image": "https://.../input.jpg"}}'
```

## Why are we building this?
It's really hard for researchers to ship machine learning models to production.
Part of the solution is Docker, but getting it to work is complex: Dockerfiles, pre-/post-processing, Flask servers, CUDA versions. More often than not, the researcher has to sit down with an engineer to get the damn thing deployed.
[Andreas](https://github.com/andreasjansson) and [Ben](https://github.com/bfirsh) created Cog. Andreas used to work at Spotify, where he built tools for building and deploying ML models with Docker. Ben worked at Docker, where he created [Docker Compose](https://github.com/docker/compose).
We realized that, in addition to Spotify, other companies were also using Docker to build and deploy machine learning models. [Uber](https://eng.uber.com/michelangelo-pyml/) and others have built similar systems. So, we're making an open source version so other people can do this too.
Hit us up if you're interested in using it or want to collaborate with us. [We're on Discord](https://discord.gg/replicate) or email us at [[email protected]](mailto:[email protected]).
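Shipping a model this way means anything that can speak HTTP can use it. As an illustration, here is a minimal Python client sketch for the prediction API shown earlier, using only the standard library; the `localhost:5000` URL and the input values are assumptions matching the earlier `docker run` and `curl` example:

```python
# Minimal sketch of calling a Cog model's HTTP prediction API from Python
# using only the standard library. Assumes a container is already serving
# on localhost:5000; the payload shape mirrors the curl example above.
import json
import urllib.request


def build_prediction_request(inputs: dict) -> urllib.request.Request:
    """Build a POST request for a Cog container's /predictions endpoint."""
    return urllib.request.Request(
        "http://localhost:5000/predictions",
        data=json.dumps({"input": inputs}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def predict_output(inputs: dict) -> object:
    """Send the prediction request and return the model's output field."""
    with urllib.request.urlopen(build_prediction_request(inputs)) as resp:
        return json.loads(resp.read())["output"]
```

With a container running (`docker run -d -p 5000:5000 --gpus all my-colorization-model`), calling `predict_output({"image": "https://.../input.jpg"})` is equivalent to the `curl` call shown earlier.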
## Prerequisites
- **macOS, Linux or Windows 11**. Cog works on macOS, Linux and Windows 11 with [WSL 2](docs/wsl2/wsl2.md)
- **Docker**. Cog uses Docker to create a container for your model. You'll need to [install Docker](https://docs.docker.com/get-docker/) before you can run Cog. If you install Docker Engine instead of Docker Desktop, you will need to [install Buildx](https://docs.docker.com/build/architecture/#buildx) as well.

## Install
If you're using macOS, you can install Cog using Homebrew:
```console
brew install cog
```

You can also download and install the latest release of Cog
directly from GitHub by running the following commands in a terminal:

```console
sudo curl -o /usr/local/bin/cog -L "https://github.com/replicate/cog/releases/latest/download/cog_$(uname -s)_$(uname -m)"
sudo chmod +x /usr/local/bin/cog
```

Alternatively, you can build Cog from source and install it with these commands:
```console
make
sudo make install
```

## Upgrade
If you previously installed Cog from a GitHub Releases URL, you can upgrade to the latest version by running the same commands you used to install it:
```console
sudo curl -o /usr/local/bin/cog -L "https://github.com/replicate/cog/releases/latest/download/cog_$(uname -s)_$(uname -m)"
sudo chmod +x /usr/local/bin/cog
```

If you're using macOS and you previously installed Cog with Homebrew, run the following:
```console
brew upgrade cog
```

## Next steps
- [Get started with an example model](docs/getting-started.md)
- [Get started with your own model](docs/getting-started-own-model.md)
- [Using Cog with notebooks](docs/notebooks.md)
- [Using Cog with Windows 11](docs/wsl2/wsl2.md)
- [Take a look at some examples of using Cog](https://github.com/replicate/cog-examples)
- [Deploy models with Cog](docs/deploy.md)
- [`cog.yaml` reference](docs/yaml.md) to learn how to define your model's environment
- [Prediction interface reference](docs/python.md) to learn how the `Predictor` interface works
- [Training interface reference](docs/training.md) to learn how to add a fine-tuning API to your model
- [HTTP API reference](docs/http.md) to learn how to use the HTTP API that models serve

## Need help?
[Join us in #cog on Discord.](https://discord.gg/replicate)
## Contributors โจ
Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):
- Ben Firshman: 💻 📖
- Andreas Jansson: 💻 📖 🚧
- Zeke Sikelianos: 💻 📖 🔧
- Rory Byrne: 💻 📖 ⚠️
- Michael Floering: 💻 📖 🤔
- Ben Evans: 📖
- shashank agarwal: 💻 📖
- VictorXLR: 💻 📖 ⚠️
- hung anna: 🐛
- Brian Whitman: 🐛
- JimothyJohn: 🐛
- ericguizzo: 🐛
- Dominic Baggott: 💻 ⚠️
- Dashiell Stander: 🐛 💻 ⚠️
- Shuwei Liang: 🐛 💬
- Eric Allam: 🤔
- Iván Perdomo: 🐛
- Charles Frye: 📖
- Luan Pham: 🐛 📖
- TommyDew: 💻
- Jesse Andrews: 💻 📖 ⚠️
- Nick Stenning: 💻 📖 🎨 🚇 ⚠️
- Justin Merrell: 📖
- Rurik Ylä-Onnenvuori: 🐛
- Youka: 🐛
- Clay Mullis: 📖
- Mattt: 💻 📖 📆
- Eng Zer Jun: ⚠️
- BB: 💻
- williamluer: 📖
- Simon Eskildsen: 💻
- F: 🐛 💻
- Philip Potter: 🐛 💻
- Joanne Chen: 📖
- technillogue: 💻
This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!