Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/run-house/runhouse
The fastest way to run and deploy AI jobs and services on any infra. Unobtrusive, debuggable, PyTorch-like APIs.
api artificial-intelligence aws azure collaboration data-science deployment distributed fastapi gcp infrastructure machine-learning middleware observability python pytorch ray sagemaker serverless
Last synced: 4 months ago
- Host: GitHub
- URL: https://github.com/run-house/runhouse
- Owner: run-house
- License: apache-2.0
- Created: 2022-05-10T14:10:51.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2024-03-12T15:02:00.000Z (4 months ago)
- Last Synced: 2024-03-12T16:36:58.550Z (4 months ago)
- Topics: api, artificial-intelligence, aws, azure, collaboration, data-science, deployment, distributed, fastapi, gcp, infrastructure, machine-learning, middleware, observability, python, pytorch, ray, sagemaker, serverless
- Language: Python
- Homepage: https://run.house
- Size: 27.9 MB
- Stars: 687
- Watchers: 7
- Forks: 31
- Open Issues: 28
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Security: docs/security-and-authentication.rst
Lists
- awesome-stars - run-house/runhouse - Write local debuggable Python which traverses your powerful remote infra. Deploy as-is. Unobtrusive, unopinionated, PyTorch-like APIs. (Python)
- awesome-stars - run-house/runhouse - PyTorch for building ML systems. Iterable, debuggable, multi-cloud, 100% reproducible across research and production. (Python)
README
# Runhouse
[![Discord](https://dcbadge.vercel.app/api/server/RnhB6589Hs?compact=true&style=flat)](https://discord.gg/RnhB6589Hs)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/runhouse_.svg?style=social&label=@runhouse_)](https://twitter.com/runhouse_)
[![Website](https://img.shields.io/badge/run.house-green)](https://www.run.house)
[![Docs](https://img.shields.io/badge/docs-blue)](https://www.run.house/docs)
[![Den](https://img.shields.io/badge/runhouse_den-purple)](https://www.run.house/login)

## Welcome Home!
Runhouse is the fastest way to build, run, and deploy production-quality AI apps and workflows on your own compute.
Leverage simple, powerful APIs for the full lifecycle of AI development, through research → evaluation → production → updates → scaling → management, and across any infra.

By automatically packaging your apps into scalable, secure, and observable services, Runhouse can also turn otherwise redundant AI activities into common reusable components across your team or company, which improves cost, velocity, and reproducibility.

Highlights:
* Dispatch Python functions, classes, and data to remote infra (clusters, cloud VMs, etc.) instantly. No need to reach for a workflow orchestrator to run different chunks of code on various beefy boxes.
* Deploy Python functions or classes as production-quality services instantly, including HTTPS, auth, observability, scaling, custom domains, secrets, versioning, and more. No research-to-production gap.
* No DSL, decorators, yaml, CLI incantations, or boilerplate. Just your own Python.
* Extensive support for Ray, Kubernetes, AWS, GCP, Azure, local, on-prem, and more. When you want to shift or scale, just send your app to more powerful infra.
* Extreme reusability and portability. A single succinct script can stand up your app, dependencies, and infra.
* Arbitrarily nest applications to create complex workflows and services. Apps are decoupled, so you can change, move, or scale any component without affecting the rest of your system.

The Runhouse API is dead simple. Send your **apps** (functions and classes) into **environments** on compute **infra**, like this:

```python
import runhouse as rh
from diffusers import StableDiffusionPipeline


def sd_generate(prompt, **inference_kwargs):
    model = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-base").to("cuda")
    return model(prompt, **inference_kwargs).images


if __name__ == "__main__":
    gpu = rh.cluster(name="rh-a10x", instance_type="A10G:1", provider="aws")
    sd_env = rh.env(reqs=["torch", "transformers", "diffusers"], name="sd_generate", working_dir="./")

    # Deploy the function and environment (syncing over local code changes and installing dependencies)
    remote_sd_generate = rh.function(sd_generate).to(gpu, env=sd_env)

    # This call is actually an HTTP request to the app running on the remote server
    imgs = remote_sd_generate("A hot dog made out of matcha.")
    imgs[0].show()

    # You can also call it over HTTP directly, e.g. from other machines or languages
    print(remote_sd_generate.endpoint())
```

With the above simple structure you can run, deploy, and share:
* **AI primitives**: Preprocessing, training, fine-tuning, evaluation, inference
* **Higher-order services**: Multi-stage inference (e.g. RAG), e2e workflows
* **Controls and safety**: PII obfuscation, content moderation, drift detection
* **Data services**: ETL, caching, data augmentation, data validation

## Share Apps and Resources with Runhouse Den
You can unlock unique portability and sharing features by creating a
[Runhouse Den account](https://www.run.house/dashboard).
Log in from anywhere to save, share, and load resources:
```shell
runhouse login
```
or from Python:
```python
import runhouse as rh
rh.login()
```

Extending the example above to share and load our app via Den:
```python
remote_sd_generate.share(["[email protected]"])

# The service stub can now be reloaded from anywhere, always at yours and your collaborators' fingertips
# Notice this code doesn't need to change if you update, move, or scale the service
remote_sd_generate = rh.function("/your_username/sd_generate")
imgs = remote_sd_generate("More matcha hotdogs.")
imgs[0].show()
```
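Because a deployed app is served over plain HTTPS, the URL returned by `remote_sd_generate.endpoint()` can be reached from any HTTP client, not just the Python stub. Here is a minimal, hypothetical sketch using only the standard library; the endpoint URL is a placeholder, and the JSON body with an `args` list is an assumption for illustration, not Runhouse's documented wire format (see the docs for the actual request shape and auth headers):

```python
import json
import urllib.request

# Hypothetical URL; in practice, use the value printed by
# remote_sd_generate.endpoint().
endpoint = "https://your-cluster.example.com/sd_generate"

# Assumed payload shape for illustration only.
payload = json.dumps({"args": ["A hot dog made out of matcha."]}).encode()
req = urllib.request.Request(
    endpoint,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send the request; it is not executed
# here because the URL above is a placeholder.
print(req.get_method(), req.full_url)
```

The same request could be issued from curl or any other language, which is what makes the deployed service callable from machines that don't have Runhouse installed.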
## Supported Compute Infra
Please reach out (first name at run.house) if you don't see your favorite compute here.
- Local - **Supported**
- Single box - **Supported**
- Ray cluster - **Supported**
- Kubernetes (K8S) - **Supported**
- Amazon Web Services (AWS)
- EC2 - **Supported**
- EKS - **Supported**
- SageMaker - **Supported**
- Lambda - **Alpha**
- Google Cloud Platform (GCP)
- GCE - **Supported**
- GKE - **Supported**
- Microsoft Azure
- VMs - **Supported**
- AKS - **Supported**
- Lambda Labs - **Supported**
- Modal Labs - Planned
- Slurm - Exploratory

## Learn More
[**Getting Started**](https://www.run.house/docs/tutorials/cloud_quick_start): Installation, setup, and a quick walkthrough.

[**Docs**](https://www.run.house/docs): Detailed API references, basic API examples and walkthroughs, end-to-end tutorials, and a high-level architecture overview.

[**Funhouse**](https://github.com/run-house/funhouse): Standalone ML apps and examples to try with Runhouse, like image generation models, LLMs, launching Gradio spaces, and more!

[**Blog**](https://www.run.house/blog): Deep dives into Runhouse features, use cases, and the future of AI infra.

[**Discord**](https://discord.gg/RnhB6589Hs): Join our community to ask questions, share ideas, and get help.

[**Twitter**](https://twitter.com/runhouse_): Follow us for updates and announcements.
## Getting Help
Message us on [Discord](https://discord.gg/RnhB6589Hs), email us (first name at run.house), or create an issue.
## Contributing
We welcome contributions! Please check out [contributing](CONTRIBUTING.md) if you're interested.