{"id":13533892,"url":"https://github.com/run-ai/genv","last_synced_at":"2025-05-16T07:04:38.749Z","repository":{"id":61588276,"uuid":"532524391","full_name":"run-ai/genv","owner":"run-ai","description":"GPU environment and cluster management with LLM support","archived":false,"fork":false,"pushed_at":"2024-05-16T11:12:54.000Z","size":9868,"stargazers_count":605,"open_issues_count":14,"forks_count":35,"subscribers_count":8,"default_branch":"main","last_synced_at":"2025-05-11T15:02:24.430Z","etag":null,"topics":["bash","container-runtime","containers","data-science","deep-learning","docker","gpu","gpus","jupyter-notebook","jupyterlab-extension","k8s","kubernetes","llm-inference","llms","nvidia-gpu","ollama","ray","vscode","vscode-extension","zsh"],"latest_commit_sha":null,"homepage":"https://www.genv.dev","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"agpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/run-ai.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2022-09-04T11:49:01.000Z","updated_at":"2025-05-09T11:03:36.000Z","dependencies_parsed_at":"2024-02-18T13:45:53.588Z","dependency_job_id":"c095de5d-8631-4d31-a250-7d25230a5912","html_url":"https://github.com/run-ai/genv","commit_stats":null,"previous_names":[],"tags_count":22,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/run-ai%2Fgenv","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/run-ai%2Fgenv/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/run-ai%2Fgenv/releases","manifests_u
rl":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/run-ai%2Fgenv/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/run-ai","download_url":"https://codeload.github.com/run-ai/genv/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254485054,"owners_count":22078767,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["bash","container-runtime","containers","data-science","deep-learning","docker","gpu","gpus","jupyter-notebook","jupyterlab-extension","k8s","kubernetes","llm-inference","llms","nvidia-gpu","ollama","ray","vscode","vscode-extension","zsh"],"created_at":"2024-08-01T07:01:24.069Z","updated_at":"2025-05-16T07:04:33.740Z","avatar_url":"https://github.com/run-ai.png","language":"Python","readme":"\u003cp align=\"center\"\u003e\n  \u003cimg src=\"images/genv blade landscape black@4x.png#gh-light-mode-only\" width=\"600\" alt=\"genv\"/\u003e\n  \u003cimg src=\"images/genv blade landscape white@4x.png#gh-dark-mode-only\" width=\"600\" alt=\"genv\"/\u003e\n\u003c/p\u003e\n\n# Genv - GPU Environment and Cluster Management\n[![Join the community at (https://discord.gg/zN3Q9pQAuT)](https://img.shields.io/badge/Discord-genv-7289da?logo=discord)](https://discord.gg/zN3Q9pQAuT)\n[![Docs](https://img.shields.io/badge/docs-genv-blue)](https://docs.genv.dev/)\n[![PyPI](https://img.shields.io/pypi/v/genv)](https://pypi.org/project/genv/)\n[![PyPI - 
Downloads](https://img.shields.io/pypi/dm/genv?label=pypi%20downloads)](https://pypi.org/project/genv/)\n[![Conda](https://img.shields.io/conda/v/conda-forge/genv?label=conda)](https://anaconda.org/conda-forge/genv)\n[![Conda - Downloads](https://img.shields.io/conda/dn/conda-forge/genv?label=conda%20downloads)](https://anaconda.org/conda-forge/genv)\n\nGenv is an open-source environment and cluster management system for GPUs.\n\nGenv lets you easily control, configure, monitor and enforce the GPU resources that you are using in a GPU machine or cluster.\n\nIt is intended to ease the process of GPU allocation for data scientists without code changes 💪🏻\n\nCheck out the Genv [documentation site](https://docs.genv.dev) for more details and [the website](https://genv.dev/) for a higher-level overview of all features.\n\nThis project was highly inspired by [pyenv](https://github.com/pyenv/pyenv) and other version, package and environment management software like [Conda](https://docs.conda.io/projects/conda/en/latest/), [nvm](https://github.com/nvm-sh/nvm), [rbenv](https://github.com/rbenv/rbenv).\n\n![Example](images/example.png)\n\n## :question: Why Genv?\n\n- Easily share GPUs with your teammates\n- Find available GPUs for you to use: on-prem or in the cloud via remote access\n- Switch between GPUs without code changes\n- Save time while collaborating\n- Serve and manage local LLMs within your team’s cluster\n\nPlus, it's 100% free and gets installed before you can say Jack Robinson.\n\n## :raising_hand: Who uses Genv? 
\n### Data Scientists \u0026 ML Engineers, who:\n- Share GPUs within a research team\n  - Pool GPUs from multiple machines (see [here](images/Pool_resources.gif)) and allocate available GPUs without SSH-ing into every machine \n  - Enforce GPU quotas for each team member, ensuring equitable resource allocation (see [here](images/Enforcement.gif))\n  - Reserve GPUs by creating a Genv environment for as long as you need them, with no one else hijacking them (see [here](images/Fractions.gif))\n- Share GPUs between different projects\n  - Allocate GPUs across different projects by creating distinct Genv environments, each with specific memory requirements \n  - Save environment configurations to seamlessly resume work and reproduce experiment settings at a later time (see [here](images/Infra_as_Code.gif))\n- Serve local open-source LLMs for faster experimentation within the whole team\n  - Deploy local open-source LLMs for accelerated experimentation across the entire team\n  - Efficiently run open-source models within the cluster\n\n### Admins, who:\n- Monitor their team’s GPU usage with a Grafana dashboard (see the image below)\n- Enforce GPU quotas (number of GPUs and amount of memory) on researchers to keep resource usage fair within the team (see [here](images/Enforcement.gif))\n  \n\n\u003cimg src=\"images/Genv_grafana.png\" alt=\"genv grafana dashboard\"/\u003e\n\n## Ollama 🤝 Genv\nReady to create an LLM playground for yourself and your teammates? \n\nGenv integrates with Ollama for managing Large Language Models (LLMs). 
This allows users to efficiently run, manage, and utilize LLMs on GPUs within their clusters.\n```\n$ genv remote -H gpu-server-1,gpu-server-2 llm serve llama2 --gpus 1\n```\nCheck out our [documentation](https://docs.genv.dev/llms/overview.html) for more information.\n\n## 🏃 Quick Start\nMake sure that you are running on a GPU machine:\n```\n$ nvidia-smi\nTue Apr  4 11:17:31 2023\n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 470.161.03   Driver Version: 470.161.03   CUDA Version: 11.4     |\n|-------------------------------+----------------------+----------------------+\n...\n```\n1. Install Genv\n\n- Using pip\n```\npip install genv\n```\n- Using conda\n```\nconda install -c conda-forge genv\n```\n2. Verify the installation by running the command:\n```\n$ genv status\nEnvironment is not active\n```\n3. Activate an environment (in this example, we activate an environment named \u003cem\u003emy-env\u003c/em\u003e that contains \u003cem\u003e1 GPU\u003c/em\u003e and will have \u003cem\u003e4GB\u003c/em\u003e of memory)\n```\n$ genv activate --name my-env --gpus 1\n(genv:my-env)$ genv config gpu-memory 4g\n(genv:my-env)$ genv status\nEnvironment is active (22716)\nAttached to GPUs at indices 0\n\nConfiguration\n   Name: my-env\n   Device count: 1\n   GPU memory capacity: 4g\n```\n4. 
Start working on your project!\n\n## :scroll: Documentation\n\nCheck out the Genv [documentation site](https://docs.genv.dev) for more details.\n\n\n## :dizzy: Simple Integration \u0026 Usage with your fav IDE\n\n\nIntegration with VSCode [(Take me to the installation guide!)](https://github.com/run-ai/vscode-genv) |\n:-------------------------:|\n\u003cimg src=\"images/overview.gif\" width=\"800\" alt=\"genv vscode\"/\u003e |\n\n\u003cbr /\u003e\n\nIntegration with JupyterLab [(Take me to the installation guide!)](https://github.com/run-ai/jupyterlab_genv) |\n:-------------------------:|\n\u003cimg src=\"images/overview_jupyterlab.gif\" width=\"800\" alt=\"genv jupyterlab\"/\u003e |\n\n\n\nA PyCharm integration is also on our roadmap, so stay tuned!\n\n\n\n## 🏃🏻 Join us in the AI Infrastructure Club\n\n[\u003cimg src=\"https://img.shields.io/badge/Discord-Join%20the%20community!-7289da?style=for-the-badge\u0026logo=discord\u0026logoColor=7289da\" height=\"30\" /\u003e](https://discord.gg/zN3Q9pQAuT)\n\nWe love connecting with our community, discussing best practices, discovering new tools, and exchanging ideas with makers about anything related to making \u0026 AI infrastructure. So we created a space for all these conversations. 
Join our Discord server for:\n\n- Genv installation and setup support, as well as best-practice tips and tricks for your use case\n- Discussing possible features for Genv (we prioritize your requests) \n- Chatting with other makers about their projects \u0026 picking their brains when you need help\n- Monthly Beers with Engineers sessions with amazing guests from research and industry ([Link to previous session recordings](https://www.youtube.com/@runai_/search?query=beers%20with%20engineers))\n\n\n## License\nThe Genv software is Copyright 2022 [Run.ai Labs, Ltd.].\nThe software is licensed by Run.ai under the AGPLv3 license.\nPlease note that Run.ai’s intention in licensing the software is that the obligations of the licensee pursuant to the AGPLv3 license should be interpreted broadly.\nFor example, Run.ai’s intention is that the terms “work based on the Program” in Section 0 of the AGPLv3 license, and “Corresponding Source” in Section 1 of the AGPLv3 license, should be interpreted as broadly as possible to the extent permitted under applicable law.\n","funding_links":[],"categories":["Software","Python"],"sub_categories":["Trends"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frun-ai%2Fgenv","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Frun-ai%2Fgenv","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frun-ai%2Fgenv/lists"}