🧑‍💻 CodeX in docker
- Host: GitHub
- URL: https://github.com/aahl/codex
- Owner: aahl
- Created: 2025-09-22T07:06:32.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2025-09-22T07:42:52.000Z (5 months ago)
- Last Synced: 2025-09-22T09:23:54.465Z (5 months ago)
- Topics: ai-coding, claude-code, codex
- Language: Dockerfile
- Homepage:
- Size: 3.91 KB
- Stars: 1
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# 🧑‍💻 CodeX in Docker
## 🐳 Usage
```bash
docker run --rm -it -v "$(pwd):/app" -v ~/.codex:/root/.codex ghcr.io/aahl/codex
```
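The command above mounts the current directory at `/app` and reuses your local `~/.codex` configuration. To work on a different project, mount that directory instead; the path below is only an example:

```bash
# Illustrative path: mount a specific project directory instead of $(pwd)
docker run --rm -it -v "$HOME/projects/myapp:/app" -v ~/.codex:/root/.codex ghcr.io/aahl/codex
```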
## 🇨🇳 GLM 4.5
Drive your CodeX with GLM's latest model (glm-4.5). [Apply for an API key](https://www.bigmodel.cn/invite?icode=EwilDKx13%2FhyODIyL%2BKabHHEaazDlIZGj9HxftzTbt4%3D)
> GLM also offers many other [free and capable](https://docs.bigmodel.cn/cn/guide/models/free/glm-4.5-flash) models!
```toml
# vim ~/.codex/config.toml

# Custom provider pointing Codex at GLM's coding API
[model_providers.glm]
name = "GLM Coding Plan"
env_key = "GLM_AUTH_TOKEN"  # API key is read from this environment variable
base_url = "https://open.bigmodel.cn/api/coding/paas/v4"

# Profiles pair a model with a provider; select one with `codex --profile <name>`
[profiles.glm45]
model = "glm-4.5"
model_provider = "glm"

[profiles.glm45flash]
model = "glm-4.5-flash"
model_provider = "glm"
```
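With these profiles in place, each run can select one via `codex --profile <name>`, the same flag the alias below uses. Here is a sketch that combines the usage command above with the paid glm-4.5 profile:

```bash
# Sketch: run Codex with the glm45 profile defined in ~/.codex/config.toml;
# the API key is read from the GLM_AUTH_TOKEN environment variable (env_key above)
docker run --rm -it -v "$(pwd):/app" -v ~/.codex:/root/.codex \
  -e GLM_AUTH_TOKEN="$GLM_AUTH_TOKEN" \
  ghcr.io/aahl/codex codex --profile glm45
```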
```bash
# vim ~/.bashrc
export GLM_AUTH_TOKEN=your_glm_token
# APT_MIRROR, NPM_REGISTRY and the ghcr.nju.edu.cn image are mainland-China mirrors; omit them if you don't need them
alias codex-glm='docker run --rm -it -v "$(pwd):/app" -v ~/.codex:/root/.codex -e GLM_AUTH_TOKEN="$GLM_AUTH_TOKEN" -e APT_MIRROR=mirrors.ustc.edu.cn -e NPM_REGISTRY=https://registry.npmmirror.com ghcr.nju.edu.cn/aahl/codex codex --profile glm45flash'
```
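Once the shell config is reloaded, the alias starts Codex in the current directory using the free glm-4.5-flash profile (assuming the export and alias above live in `~/.bashrc`):

```bash
source ~/.bashrc   # pick up the new export and alias
codex-glm          # launches Codex with the glm45flash profile against $(pwd)
```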
## 🔗 Links
- [Docker Image Tags](https://github.com/aahl/codex/pkgs/container/codex/versions?filters[version_type]=tagged)
- https://zread.ai/aahl/codex
- https://linux.do/t/topic/974745
- https://www.bigmodel.cn/invite?icode=EwilDKx13%2FhyODIyL%2BKabHHEaazDlIZGj9HxftzTbt4%3D