{"id":30681440,"url":"https://github.com/futursolo/pai","last_synced_at":"2026-04-08T13:02:20.255Z","repository":{"id":310060360,"uuid":"1038501529","full_name":"futursolo/pai","owner":"futursolo","description":"Collection of AI Containers - Prebuilt and Ready-to-Use","archived":false,"fork":false,"pushed_at":"2026-03-13T13:04:37.000Z","size":184,"stargazers_count":0,"open_issues_count":4,"forks_count":1,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-03-13T23:48:25.380Z","etag":null,"topics":["ai","comfyui","containers","docker","ipex","ipex-llm","koboldcpp","llamacpp","llm","ollama","rocm"],"latest_commit_sha":null,"homepage":"","language":"Dockerfile","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/futursolo.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-08-15T10:13:31.000Z","updated_at":"2026-03-13T11:26:46.000Z","dependencies_parsed_at":"2025-08-15T14:29:29.437Z","dependency_job_id":"8c09204e-b63e-435f-a96d-775579b6f590","html_url":"https://github.com/futursolo/pai","commit_stats":null,"previous_names":["futursolo/portable-ai","futursolo/pai"],"tags_count":70,"template":false,"template_full_name":null,"purl":"pkg:github/futursolo/pai","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/futursolo%2Fpai","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/futursolo%2Fpai/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fut
ursolo%2Fpai/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/futursolo%2Fpai/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/futursolo","download_url":"https://codeload.github.com/futursolo/pai/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/futursolo%2Fpai/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31556239,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-08T10:21:54.569Z","status":"ssl_error","status_checked_at":"2026-04-08T10:21:38.171Z","response_time":54,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","comfyui","containers","docker","ipex","ipex-llm","koboldcpp","llamacpp","llm","ollama","rocm"],"created_at":"2025-09-01T17:11:16.321Z","updated_at":"2026-04-08T13:02:20.226Z","avatar_url":"https://github.com/futursolo.png","language":"Dockerfile","readme":"# Pai - Portable Artificial Intelligence\n\nThe Pai (Portable Artificial Intelligence) Project is an initiative to produce a AI containers that can be run with a single command.\n\n### Rationale\n\nThe primary purpose of this project is to reduce the burden to manage dependencies.\nSince different application may require different ROCm or One-API versions,\nthis creates compatibility issues when trying 
to run multiple applications on the same machine.

This project solves this issue by packaging all necessary dependencies into the container and providing images
that can be executed with a single `docker run` command.

Unless specified otherwise, the images should work with the `xe` or `amdgpu` driver from the mainline Linux kernel.
Vendor drivers are optional.

### Supported Variants

The following variants are usually provided:

- `vulkan` (any Vulkan-compatible GPU)
- `intel` / `ipex` (Intel Arc)
- `rocm` (AMD Radeon)

\* CUDA is not supported because I currently do not have any NVIDIA graphics cards to test CUDA-based images,
and CUDA-based images are usually already available from other sources.

### Supported Apps

- [KoboldCPP](./apps/koboldcpp/README.md)
  - `rocm`: `ghcr.io/futursolo/pai-apps/koboldcpp:rocm`
  - `vulkan`: `ghcr.io/futursolo/pai-apps/koboldcpp:vulkan`
- [ComfyUI](./apps/comfyui/README.md)
  - `rocm`: `ghcr.io/futursolo/pai-apps/comfyui:rocm`
  - `ipex`: `ghcr.io/futursolo/pai-apps/comfyui:ipex`
- [Ollama](./apps/ollama/README.md)
  - `intel`: `ghcr.io/futursolo/pai-apps/ollama:intel`

### Permissions and Capabilities

For containers to access the necessary accelerators, the following configuration is required:

#### ROCm

The container must have access to the following devices:

- `/dev/dri` (Direct Rendering Infrastructure)
- `/dev/kfd` (AMD Kernel Fusion Driver)

They can be mapped like the following in the `docker-compose.yml`:

```yaml
services:
  app:
    # ...
    devices:
      - /dev/dri
      - /dev/kfd
```

The user inside the container must belong to the following groups:

- `video`
  - This group usually has a fixed Group ID (44) and is embedded in the container, so it can be referenced by name.
- `render`
  - This group has a dynamic Group ID, which must match the host environment.
  - You can find it with: `getent group render | awk
'{split($0,a,":"); print a[3]}'`

They can be added like the following in the `docker-compose.yml`:

```yaml
services:
  app:
    # ...
    group_add:
      - video
      - $GROUP_ID_RENDER # This must match the render group ID of the machine that runs the container, see above.
```

Optionally, the container can enable CPU / GPU memory mapping to improve performance.

It can be configured like the following in the `docker-compose.yml`:

```yaml
services:
  app:
    # ...
    security_opt:
      - seccomp:unconfined
```

Optionally, the container can increase the shared memory size to improve performance.

It can be configured like the following in the `docker-compose.yml`:

```yaml
services:
  app:
    # ...
    shm_size: 16G
```

#### Intel / IPEX

The container must have access to the following devices:

- `/dev/dri` (Direct Rendering Infrastructure)

They can be mapped like the following in the `docker-compose.yml`:

```yaml
services:
  app:
    # ...
    devices:
      - /dev/dri
```

The user inside the container must belong to the following groups:

- `video`
  - This group usually has a fixed Group ID (44) and is embedded in the container, so it can be referenced by name.
- `render`
  - This group has a dynamic Group ID, which must match the host environment.
  - You can find it with: `getent group render | awk '{split($0,a,":"); print a[3]}'`

They can be added like the following in the `docker-compose.yml`:

```yaml
services:
  app:
    # ...
    group_add:
      - video
      - $GROUP_ID_RENDER # This must match the render group ID of the machine that runs the container, see above.
```

Optionally, the container can enable CPU / GPU memory mapping to improve performance.

It can be configured like the following in the `docker-compose.yml`:

```yaml
services:
  app:
    # ...
    security_opt:
      - seccomp:unconfined
```

Optionally, the container can increase the
shared memory size to improve performance.

It can be configured like the following in the `docker-compose.yml`:

```yaml
services:
  app:
    # ...
    shm_size: 16G
```

For containers running under WSL2 (Windows Subsystem for Linux), you also need the following configuration:

```yaml
services:
  app:
    # ...
    privileged: true
    volumes:
      - /usr/lib/wsl:/usr/lib/wsl
```

### Rootless Containers

All images run as `pai-user:pai-user` (1999:1999) by default instead of root.
You may also run containers with another UID and GID at runtime.
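Putting the ROCm pieces together, a complete `docker-compose.yml` might look like the sketch below. The service name, the published port, and the `993` render GID are illustrative; substitute your own values (the image is the KoboldCPP `rocm` variant listed above):

```yaml
services:
  koboldcpp:
    image: ghcr.io/futursolo/pai-apps/koboldcpp:rocm
    devices:
      - /dev/dri
      - /dev/kfd
    group_add:
      - video
      - "993" # replace with the host's render GID (getent group render)
    security_opt:
      - seccomp:unconfined # optional: CPU / GPU memory mapping
    shm_size: 16G # optional: larger shared memory
    ports:
      - "5001:5001" # illustrative; use the app's actual port
```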
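As a note on the `getent group render` pipeline used above: it simply splits the colon-delimited group entry and prints its third field, the numeric GID. The parsing can be sanity-checked against a sample entry (the group line below is made up; on a real host, pipe `getent group render` instead of the `echo`):

```shell
# /etc/group entries have the form name:password:GID:members.
# split($0,a,":") breaks the line on ":"; a[3] is the numeric group ID.
echo 'render:x:993:alice,bob' | awk '{split($0,a,":"); print a[3]}'
```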