{"id":23050584,"url":"https://github.com/ai-ql/openai-proxy-docker","last_synced_at":"2025-08-15T03:31:30.355Z","repository":{"id":187412816,"uuid":"676866936","full_name":"AI-QL/openai-proxy-docker","owner":"AI-QL","description":"API Proxy | API 代理 | Прокси | وكيل | OpenAI | Nvidia-NIM | Claude","archived":false,"fork":false,"pushed_at":"2024-11-28T07:13:32.000Z","size":54,"stargazers_count":14,"open_issues_count":0,"forks_count":7,"subscribers_count":2,"default_branch":"main","last_synced_at":"2024-11-28T08:23:01.647Z","etag":null,"topics":["chatgpt","claude-api","docker","docker-compose","nvidia-nim","openai-api","proxy"],"latest_commit_sha":null,"homepage":"https://api.aiql.com/","language":"Dockerfile","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/AI-QL.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-08-10T07:37:04.000Z","updated_at":"2024-11-28T07:21:01.000Z","dependencies_parsed_at":"2024-04-29T09:42:22.688Z","dependency_job_id":"115674dc-a5e4-409c-b0fd-c099d048a289","html_url":"https://github.com/AI-QL/openai-proxy-docker","commit_stats":null,"previous_names":["qiushihao/openai-proxy-docker","aiql-community/openai-proxy-docker","aiql-admin/openai-proxy-docker","ai-ql/openai-proxy-docker"],"tags_count":5,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AI-QL%2Fopenai-proxy-docker","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AI-QL%2Fopenai-proxy-docker/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/Git
Hub/repositories/AI-QL%2Fopenai-proxy-docker/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/AI-QL%2Fopenai-proxy-docker/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/AI-QL","download_url":"https://codeload.github.com/AI-QL/openai-proxy-docker/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":229890133,"owners_count":18140041,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["chatgpt","claude-api","docker","docker-compose","nvidia-nim","openai-api","proxy"],"created_at":"2024-12-15T23:34:35.946Z","updated_at":"2025-08-15T03:31:30.313Z","avatar_url":"https://github.com/AI-QL.png","language":"Dockerfile","readme":"# OpenAI Proxy Docker\n\n[![Docker Pulls](https://img.shields.io/docker/pulls/aiql/openai-proxy-docker.svg)](https://hub.docker.com/r/aiql/openai-proxy-docker)\n[![LICENSE](https://img.shields.io/github/license/AI-QL/openai-proxy-docker)](https://github.com/AI-QL/openai-proxy-docker/blob/main/LICENSE)\n\nThis repository provides a Dockerized proxy for accessing the OpenAI API, allowing for simplified and streamlined interaction with the model.\n\nWith the [Docker image](https://hub.docker.com/r/aiql/openai-proxy-docker), you can easily deploy a proxy instance to serve as a gateway between your application and the OpenAI API, reducing the complexity of API interactions and enabling more efficient development.\n\n## Use case\n\n1. 
For users who are restricted from direct access to the OpenAI API, particularly those in countries where OpenAI began blocking API access in July 2024\n2. For users who need to access private APIs that lack Cross-Origin Resource Sharing (CORS) headers; this solution provides a proxy that bypasses CORS restrictions and enables seamless API interactions.\n\n## Demo\n\n- #### API demo https://api.aiql.com\n- #### UI demo [ChatUI](https://github.com/AI-QL/chat-ui)\n\n### For detailed usage of the OpenAI API, please check:\n- #### [OpenAI API Reference](https://platform.openai.com/docs/api-reference/introduction) (official docs)\n- #### [RESTful OpenAPI](https://api-ui.aiql.com) (provided by AIQL)\n\n\n## Run remotely via Docker\n\nExecute this command to start the proxy with default settings:\n\n```shell\nsudo docker run -d -p 9017:9017 aiql/openai-proxy-docker:latest\n```\n\nThen you can access it at `YOURIP:9017`\n\n\u003e For example, the proxied OpenAI Chat Completion API will be: `YOURIP:9017/v1/chat/completions`\n\u003e \n\u003e It should behave the same as `api.openai.com/v1/chat/completions`\n\nYou can change the default port and target by setting `-e` in Docker, which means you can use the proxy in front of any backend that follows the OpenAI API format:\n\n| Parameter | Default Value |\n| --------- | ------------- |\n| PORT      | 9017          |\n| TARGET    | https://api.openai.com |\n\n\n## Run locally via NPX\n\nExecute this command to start the proxy with default settings:\n\n```shell\nnpx @ai-ql/api-proxy\n```\n\nTo skip installation prompts and specify parameters:\n\n```shell\nnpx -y @ai-ql/api-proxy --target=\"https://api.deepinfra.com/v1/openai\" --port=\"9019\"\n```\n\n\n## How to dev\n\nClick below to use the GitHub Codespace:\n\n[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/aiql-community/openai-proxy-docker?quickstart=1)\n\nOr fork this repo and create a codespace manually:\n1. 
Wait for the environment to be ready in your browser\n2. `npm ci`\n3. `npm start`\n\nThe codespace will then provide a forwarded port (default 9017) for you to check that the proxy is running.\n\nIf everything is OK, verify the Docker build with:\n```\ndocker build .\n```\n\n## Docker Push\n\nIf you want to maintain your own Docker image, refer to the GitHub [Actions](./.github/workflows/docker-image.yml) workflow\n\nFork this repo and set `DOCKERHUB_USERNAME` and `DOCKERHUB_TOKEN` in your repository secrets\n\nNormally, the steps should be:\n\n1. [Fork](https://github.com/aiql-community/openai-proxy-docker/fork) this repo\n2. Settings → Secrets and variables → Actions → New repository secret\n\n## Docker Compose\n\n### Example 1\nYou can apply this approach to other APIs, such as Nvidia NIM:\n- The proxied Nvidia NIM Completion API will be: `YOURIP:9101/v1/chat/completions`\n  \u003e For convenience, a readily available API is provided for those who prefer not to deploy it independently: `https://nvidia.aiql.com/v1/chat/completions`\n\n```yaml\nservices:\n  nvidia-proxy:\n    image: aiql/openai-proxy-docker:latest\n    container_name: nvidia-proxy\n    environment:\n      PORT: \"9101\"\n      TARGET: \"https://integrate.api.nvidia.com\"\n    restart: always\n    network_mode: host\n```\n\n### Example 2\nYou can apply this approach with your own domain over HTTPS:\n- `YOUREMAILADDR@example.com` will be used to receive certificate notifications from the ACME server\n- The proxied OpenAI Chat Completion API will be: `api.example.com/v1/chat/completions`\n  \u003e `api.example.com` should be replaced by your own domain name\n\n```yaml\nservices:\n  nginx-proxy:\n    image: nginxproxy/nginx-proxy\n    container_name: nginx-proxy\n    ports:\n      - \"80:80\"\n      - \"443:443/tcp\"\n      - \"443:443/udp\"\n    environment:\n      ENABLE_HTTP3: \"true\"\n    volumes:\n      - conf:/etc/nginx/conf.d\n      - vhost:/etc/nginx/vhost.d\n      - html:/usr/share/nginx/html\n      - certs:/etc/nginx/certs:ro\n      - 
/var/run/docker.sock:/tmp/docker.sock:ro\n    restart: always\n    network_mode: bridge\n\n  acme-companion:\n    image: nginxproxy/acme-companion\n    container_name: nginx-proxy-acme\n    environment:\n      - DEFAULT_EMAIL=YOUREMAILADDR@example.com\n    volumes_from:\n      - nginx-proxy\n    volumes:\n      - certs:/etc/nginx/certs:rw\n      - acme:/etc/acme.sh\n      - /var/run/docker.sock:/var/run/docker.sock:ro\n    network_mode: bridge\n\n  openai-proxy:\n    image: aiql/openai-proxy-docker:latest\n    container_name: openai-proxy\n    environment:\n      LETSENCRYPT_HOST: api.example.com\n      VIRTUAL_HOST: api.example.com\n      VIRTUAL_PORT: \"9017\"\n    network_mode: host\n    depends_on:\n      - \"nginx-proxy\"\n\nvolumes:\n  conf:\n  vhost:\n  html:\n  certs:\n  acme:\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fai-ql%2Fopenai-proxy-docker","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fai-ql%2Fopenai-proxy-docker","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fai-ql%2Fopenai-proxy-docker/lists"}