https://github.com/av/tools
Docker image for installing and running tools for LLM agents (MCP, OpenAPI, UVX, NPX, Python)
- Host: GitHub
- URL: https://github.com/av/tools
- Owner: av
- License: mit
- Created: 2025-04-05T08:49:48.000Z (21 days ago)
- Default Branch: main
- Last Pushed: 2025-04-05T10:14:04.000Z (21 days ago)
- Last Synced: 2025-04-05T10:27:00.809Z (21 days ago)
- Topics: agents, containerization, docker, image, llm, mcp, openapi, package, tools
- Language: Dockerfile
- Homepage:
- Size: 21.5 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE
README

Docker image for installing and running tools for LLM agents (MCP, OpenAPI, UVX, NPX, Python)
### Features
- Python / Node.js runtime - includes `python`, `node`, `uvx`, `npx`
- Includes extra packages for managing MCP/OpenAPI tools and connections
  - [`mcpo`](https://github.com/open-webui/mcpo) - MCP to OpenAPI bridge
  - [`supergateway`](https://github.com/supercorp-ai/supergateway) - MCP STDIO/SSE bridge
  - [`@modelcontextprotocol/inspector`](https://github.com/modelcontextprotocol/inspector) - debugging tool for MCP
- Utils: `curl`, `jq`, `git`
- Easy unified cache at `/app/cache` for all tools

### Usage
```bash
# Launch MCP tools in stdio mode
docker run ghcr.io/av/tools uvx mcp-server-time

# Bridge from MCP to OpenAPI
docker run -p 8000:8000 ghcr.io/av/tools uvx mcpo -- uvx mcp-server-time --local-timezone=America/New_York
# http://0.0.0.0:8000/docs -> see endpoint documentation
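
# A sketch, not from this README: mcpo exposes each MCP tool as an OpenAPI endpoint.
# The tool name and payload below are assumptions for mcp-server-time; check /docs
# for the actual schema.
curl -X POST http://localhost:8000/get_current_time \
  -H "Content-Type: application/json" \
  -d '{"timezone": "America/New_York"}'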

# Run MCP inspector
docker run -p 6274:6274 -p 6277:6277 ghcr.io/av/tools npx @modelcontextprotocol/inspector

# Persist the cache volume for quick restarts
# -v cache:/app/cache - named docker volume
# -v /path/to/my/cache:/app/cache - cache on the host
docker run -v cache:/app/cache ghcr.io/av/tools uvx mcp-server-time
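
# A sketch, not from this README: the bundled supergateway can expose a stdio MCP
# server over SSE; --stdio and --port are flags documented by supergateway itself.
docker run -p 8000:8000 ghcr.io/av/tools npx -y supergateway --stdio "uvx mcp-server-time" --port 8000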
```

In Docker Compose:
```yaml
services:
  time:
    image: ghcr.io/av/tools
    command: uvx mcp-server-time
    volumes:
      - cache:/app/cache

  fetch:
    image: ghcr.io/av/tools
    command: uvx mcpo -- uvx mcp-server-fetch
    ports:
      - 7133:8000
    volumes:
      - cache:/app/cache
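
  # A sketch, not from this README: the MCP inspector listed in Features can run
  # as a compose service as well; the ports mirror the `docker run` example above.
  inspector:
    image: ghcr.io/av/tools
    command: npx @modelcontextprotocol/inspector
    ports:
      - 6274:6274
      - 6277:6277
    volumes:
      - cache:/app/cache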

volumes:
  # named volume shared by the services above
  cache:
```

---
Check out [Harbor](https://github.com/av/harbor) for a complete dockerized LLM environment.