{"id":47292701,"url":"https://github.com/project-david-ai/projectdavid-platform","last_synced_at":"2026-04-10T11:04:29.039Z","repository":{"id":288022179,"uuid":"966541892","full_name":"project-david-ai/projectdavid-platform","owner":"project-david-ai","description":"A single pip installed package will orchestrate a production ready instance of the AI stack in any environment","archived":false,"fork":false,"pushed_at":"2026-04-07T00:15:48.000Z","size":2549,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2026-04-07T02:21:10.082Z","etag":null,"topics":["agents","ai-infrastructure","assistant-computer-control","code-interpreter","deep-search","fastapi","local-inference","nginx","ollama-api","openai-api","private","rag-pipeline","search-engine","tool-calling","vector-database","vision","vllm","web-search"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/project-david-ai.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-04-15T04:50:50.000Z","updated_at":"2026-04-07T00:15:51.000Z","dependencies_parsed_at":null,"dependency_job_id":"caed56ff-b32b-4edf-9d1c-199ac80d6925","html_url":"https://github.com/project-david-ai/projectdavid-platform","commit_stats":null,"previous_names":["frankie336/entities","project-david-ai/platform-docker","project-david-ai/projectdavid-platform"],"tags_count"
:63,"template":false,"template_full_name":null,"purl":"pkg:github/project-david-ai/projectdavid-platform","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/project-david-ai%2Fprojectdavid-platform","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/project-david-ai%2Fprojectdavid-platform/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/project-david-ai%2Fprojectdavid-platform/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/project-david-ai%2Fprojectdavid-platform/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/project-david-ai","download_url":"https://codeload.github.com/project-david-ai/projectdavid-platform/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/project-david-ai%2Fprojectdavid-platform/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31639526,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-10T07:40:12.752Z","status":"ssl_error","status_checked_at":"2026-04-10T07:40:11.664Z","response_time":98,"last_error":"SSL_read: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agents","ai-infrastructure","assistant-computer-control","code-interpreter","deep-search","fastapi","local-inference","nginx","ollama-api","openai-api","private","rag-pipeline","search-engine","tool-calling","vector-database","vision","vllm","web-search"],"created_at":"2026-03-16T10:08:28.099Z","updated_at":"2026-04-10T11:04:29.022Z","avatar_url":"https://github.com/project-david-ai.png","language":"Python","funding_links":[],"categories":[],"sub_categories":[],"readme":"# Project David Platform\n\n[![Docker Pulls](https://img.shields.io/docker/pulls/thanosprime/entities-api-api?label=API%20Pulls\u0026logo=docker\u0026style=flat-square)](https://hub.docker.com/r/thanosprime/entities-api-api)\n[![CI](https://github.com/project-david-ai/platform-docker/actions/workflows/ci.yml/badge.svg)](https://github.com/project-david-ai/platform-docker/actions/workflows/ci.yml)\n[![PyPI](https://img.shields.io/pypi/v/projectdavid-platform?style=flat-square)](https://pypi.org/project/projectdavid-platform/)\n[![License: PolyForm Noncommercial](https://img.shields.io/badge/license-PolyForm%20Noncommercial%201.0.0-blue.svg)](https://polyformproject.org/licenses/noncommercial/1.0.0/)\n\nprojectdavid-platform is a self-hosted AI runtime that implements the OpenAI Assistants API specification, deployable on your own infrastructure, fully air-gapped if required.\nYou get a production-ready API server out of the box: assistants, autonomous agents, RAG pipelines, sandboxed code execution, and multi-turn 
conversation, all through a single, standards-compliant REST API with full parity to OpenAI.\nConnect any model, anywhere. Run inference locally via Ollama or vLLM, route to remote providers like Together AI, or span both, all through one unified API surface. Switch providers without changing a line of application code.\nIf your stack already speaks the OpenAI Assistants API, it already speaks Project David.\n\n\n**Your models. Your data. Your infrastructure. Zero lock-in.**\n\n---\n\n[![Project David](https://raw.githubusercontent.com/frankie336/entities_api/master/assets/projectdavid_logo.png)](https://raw.githubusercontent.com/frankie336/entities_api/master/assets/projectdavid_logo.png)\n\n---\n\n## Documentation\n\n| Topic | Link |\n|---|---|\n| Full Documentation | [docs.projectdavid.co.uk](https://docs.projectdavid.co.uk/docs) |\n| Platform Overview | [docs.projectdavid.co.uk/docs/platform-overview](https://docs.projectdavid.co.uk/docs/platform-overview) |\n| Configuration Reference | [docs.projectdavid.co.uk/docs/platform-configuration](https://docs.projectdavid.co.uk/docs/platform-configuration) |\n| Upgrading | [docs.projectdavid.co.uk/docs/platform-upgrading](https://docs.projectdavid.co.uk/docs/platform-upgrading) |\n| CLI Reference | [docs.projectdavid.co.uk/docs/projectdavid-platform-commands](https://docs.projectdavid.co.uk/docs/projectdavid-platform-commands) |\n| SDK Quick Start | [docs.projectdavid.co.uk/docs/sdk-quick-start](https://docs.projectdavid.co.uk/docs/sdk-quick-start) |\n| Sovereign Forge | [docs.projectdavid.co.uk/docs/1_sovereign-forge-cluster](https://docs.projectdavid.co.uk/docs/1_sovereign-forge-cluster) |\n\n---\n\n## Installation\n\n```bash\npip install projectdavid-platform\n```\n\nNo repository clone required. The compose files and configuration templates are bundled with the package.\n\n\u003e **Windows users:** pip installs the `pdavid` command to a Scripts directory that is not on PATH by default. 
If `pdavid` is not found after installation, add the following to your PATH:\n\u003e\n\u003e ```\n\u003e C:\\Users\\\u003cyour-username\u003e\\AppData\\Roaming\\Python\\Python3XX\\Scripts\n\u003e ```\n\u003e\n\u003e Replace `Python3XX` with your Python version (e.g. `Python313`). On Linux and macOS this is handled automatically.\n\n---\n\n## Quick Start\n\nThis section walks you from a fresh install to your first streaming inference response.\n\n### 1. Start the stack\n\n```bash\npdavid --mode up\n```\n\nOn first run this will generate a `.env` file with unique cryptographically secure secrets, prompt for optional values, pull all required Docker images, and start the full stack in detached mode.\n\nTo start with local GPU inference via Ollama:\n\n```bash\npdavid --mode up --ollama\n```\n\nRequires an NVIDIA GPU with the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) installed.\n\nTo start the full Sovereign Forge training and inference mesh:\n\n```bash\npdavid --mode up --training\n```\n\nThis starts the training pipeline, Ray cluster, and the Ray Serve inference worker. See [Sovereign Forge](#sovereign-forge--private-training--inference-mesh) below.\n\n---\n\n### 2. Bootstrap the admin user\n\n```bash\npdavid bootstrap-admin\n```\n\nExpected output:\n\n```\n================================================================\n  ✓  Admin API Key Generated\n================================================================\n  Email   : admin@example.com\n  User ID : user_abc123...\n  Prefix  : ad_abc12\n----------------------------------------------------------------\n  API KEY : ad_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\n----------------------------------------------------------------\n  This key will NOT be shown again.\n================================================================\n```\n\n\u003e ⚠️ Store this key immediately. 
It is shown exactly once and cannot be recovered.\n\n---\n\n### 3. Create a user and API key\n\nThe admin key provisions users. Each user gets their own API key for SDK operations.\n\n```python\nimport os\nfrom projectdavid import Entity\nfrom dotenv import load_dotenv\nload_dotenv()\n\nclient = Entity(\n    base_url=os.getenv(\"PROJECT_DAVID_PLATFORM_BASE_URL\"),\n    api_key=os.getenv(\"PROJECT_DAVID_PLATFORM_ADMIN_KEY\"),\n)\n\n# Create a user\nuser = client.users.create_user(\n    name=\"Sam Flynn\",\n    email=\"sam@encom.com\",\n)\nprint(user.id)\n\n# Create an API key for that user\napi_key = client.keys.create_key(user_id=user.id)\nprint(api_key)\n```\n\nStore `user.id` and the printed API key — you will need both for SDK operations.\n\n---\n\n### 4. Run your first inference\n\nInstall the SDK:\n\n```bash\npip install projectdavid\n```\n\n\u003e ⚠️ `projectdavid` is the developer SDK. `projectdavid-platform` is the deployment orchestrator. Do not confuse the two.\n\n```python\nimport os\nfrom projectdavid import Entity, ContentEvent, ReasoningEvent\nfrom dotenv import load_dotenv\nload_dotenv()\n\n# Use the user key — not the admin key — for application operations.\nclient = Entity(\n    base_url=os.getenv(\"PROJECT_DAVID_PLATFORM_BASE_URL\"),\n    api_key=os.getenv(\"PROJECT_DAVID_PLATFORM_USER_KEY\"),\n)\n\n# Create an assistant\nassistant = client.assistants.create_assistant(\n    name=\"Test Assistant\",\n    model=\"DeepSeek-V3\",\n    instructions=\"You are a helpful AI assistant named Nexa.\",\n    tools=[\n        {\"type\": \"web_search\"},\n    ],\n)\n\n# Create a thread — threads maintain the full message state between turns\nthread = client.threads.create_thread()\n\n# Add a message to the thread\nmessage = client.messages.create_message(\n    thread_id=thread.id,\n    assistant_id=assistant.id,\n    content=\"Find me a positive news story from today.\",\n)\n\n# Create a run\nrun = client.runs.create_run(\n    assistant_id=assistant.id,\n    
thread_id=thread.id,\n)\n\n# Set up the inference stream — bring your own provider API key\nstream = client.synchronous_inference_stream\nstream.setup(\n    thread_id=thread.id,\n    assistant_id=assistant.id,\n    message_id=message.id,\n    run_id=run.id,\n    api_key=os.getenv(\"HYPERBOLIC_API_KEY\"),  # or TOGETHER_API_KEY etc.\n)\n\n# Stream the response\nfor event in stream.stream_events(model=\"hyperbolic/deepseek-ai/DeepSeek-V3\"):\n    if isinstance(event, ReasoningEvent):\n        print(event.content, end=\"\", flush=True)\n    elif isinstance(event, ContentEvent):\n        print(event.content, end=\"\", flush=True)\n```\n\n**See the complete SDK reference at [docs.projectdavid.co.uk/docs/sdk-quick-start](https://docs.projectdavid.co.uk/docs/sdk-quick-start).**\n\n---\n\n## Your Architecture\n\nDo not use the platform API as your application backend directly. The intended design is a three-tier architecture:\n\n- **projectdavid-platform** — inference orchestrator (this package)\n- **Your backend** — business logic, auth, data\n- **Your frontend** — user interface\n\nSee the [reference backend](https://github.com/project-david-ai/reference-backend) and [reference frontend](https://github.com/project-david-ai/reference-frontend) for starting points.\n\n---\n\n## Stack\n\n![Project David Stack](assets/svg/projectdavid-stack.svg)\n\n| Service | Image | Description |\n|---|---|---|\n| `api` | `thanosprime/projectdavid-core-api` | FastAPI backend exposing assistant and inference endpoints |\n| `sandbox` | `thanosprime/projectdavid-core-sandbox` | Secure code execution environment |\n| `db` | `mysql:8.0` | Relational persistence |\n| `qdrant` | `qdrant/qdrant` | Vector database for embeddings and RAG |\n| `redis` | `redis:7` | Cache and message broker |\n| `searxng` | `searxng/searxng` | Self-hosted web search |\n| `browser` | `browserless/chromium` | Headless browser for web agent tooling |\n| `otel-collector` | `otel/opentelemetry-collector-contrib` | 
Telemetry collection |\n| `jaeger` | `jaegertracing/all-in-one` | Distributed tracing UI |\n| `samba` | `dperson/samba` | File sharing for uploaded documents |\n| `nginx` | `nginx:alpine` | Reverse proxy — single public entry point on port 80 |\n| `ollama` | `ollama/ollama` | Local LLM inference (opt-in, `--ollama`) |\n| `inference-worker` | `thanosprime/projectdavid-core-inference-worker` | Ray HEAD node + Ray Serve inference mesh (opt-in, `--training`) |\n| `training-worker` | `thanosprime/projectdavid-core-training-worker` | Fine-tuning job runner (opt-in, `--training`) |\n| `training-api` | `thanosprime/projectdavid-core-training-api` | Fine-tuning REST API (opt-in, `--training`) |\n\n---\n\n## System Requirements\n\n| Resource | Minimum | Notes |\n|---|---|---|\n| CPU | 4 cores | 8+ recommended |\n| RAM | 16GB | 32GB+ if running the inference mesh |\n| Disk | 50GB free | SSD recommended |\n| GPU | — | NVIDIA 8GB+ VRAM, optional, required only for Ollama / Sovereign Forge |\n\nRuntime dependencies: Docker Engine 24+, Docker Compose v2+, Python 3.9+. 
`nvidia-container-toolkit` required only for GPU services.\n\n---\n\n## Lifecycle Commands\n\n| Action | Command |\n|---|---|\n| Start the stack | `pdavid --mode up` |\n| Start with Ollama | `pdavid --mode up --ollama` |\n| Start Sovereign Forge | `pdavid --mode up --training` |\n| Start Sovereign Forge + Ollama | `pdavid --mode up --training --ollama` |\n| Pull latest images | `pdavid --mode up --pull` |\n| Stop the stack | `pdavid --mode down_only` |\n| Stop and remove all volumes | `pdavid --mode down_only --clear-volumes` |\n| Force recreate containers | `pdavid --mode up --force-recreate` |\n| Stream logs | `pdavid --mode logs --follow` |\n| Add a GPU worker node | `pdavid worker --join \u003chead-node-ip\u003e` |\n| Destroy all stack data | `pdavid --nuke` |\n\nFull CLI reference at [docs.projectdavid.co.uk/docs/projectdavid-platform-commands](https://docs.projectdavid.co.uk/docs/projectdavid-platform-commands).\n\n---\n\n## Configuration\n\n```bash\npdavid configure --set HF_TOKEN=hf_abc123\npdavid configure --set TRAINING_PROFILE=standard\npdavid configure --interactive\n```\n\nRotating `MYSQL_PASSWORD`, `MYSQL_ROOT_PASSWORD`, or `SMBCLIENT_PASSWORD` on a live stack requires a full down and volume clear. The CLI will warn you.\n\nFull configuration reference at [docs.projectdavid.co.uk/docs/platform-configuration](https://docs.projectdavid.co.uk/docs/platform-configuration).\n\n---\n\n## Upgrading\n\n```bash\npip install --upgrade projectdavid-platform\npdavid --mode up --pull\n```\n\nAfter upgrading, `pdavid` will print a notice on the next run pointing to the changelog. Running `--pull` fetches the latest container images. 
Your data and secrets are not affected.\n\nFull upgrade guide at [docs.projectdavid.co.uk/docs/platform-upgrading](https://docs.projectdavid.co.uk/docs/platform-upgrading).\n\n---\n\n## Sovereign Forge — Private Training + Inference Mesh\n\nProject David includes an opt-in fine-tuning and inference cluster built on Ray Serve.\nPoint it at any NVIDIA GPU — a laptop, a workstation, a gaming rig, or an H100\nrack — and it handles training job scheduling, model deployment, and inference\nrouting across all of them simultaneously. Your data and models never leave your\nmachines.\n\n![Project David Cluster](https://raw.githubusercontent.com/project-david-ai/projectdavid-core/master/assets/svg/pd_cluster.svg)\n\n```bash\npdavid --mode up --training\n```\n\nThis starts three services under a Docker Compose profile:\n\n- **`inference-worker`** — Ray HEAD node. Owns the GPU on this machine. Runs Ray Serve and hosts the InferenceReconciler. All vLLM inference is managed here. The main API's `VLLM_BASE_URL` points to this container.\n- **`training-worker`** — Fine-tuning job runner. Manages the training job lifecycle via Redis queue.\n- **`training-api`** — REST API for datasets, training jobs, and the model registry.\n\n### Scale-out — adding a second GPU machine\n\nRun this on machine 2, 3, or N. 
No compose files or full stack installation needed on worker machines — just Docker and the NVIDIA Container Toolkit.\n\n```bash\npip install projectdavid-platform\npdavid worker --join \u003chead-node-ip\u003e\n```\n\nRay discovers the new node automatically and the InferenceReconciler distributes load across all available GPUs.\n\nFull documentation at [docs.projectdavid.co.uk/docs/1_sovereign-forge-cluster](https://docs.projectdavid.co.uk/docs/1_sovereign-forge-cluster).\n\n---\n\n## Docker Images\n\n- [thanosprime/projectdavid-core-api](https://hub.docker.com/r/thanosprime/projectdavid-core-api)\n- [thanosprime/projectdavid-core-sandbox](https://hub.docker.com/r/thanosprime/projectdavid-core-sandbox)\n- [thanosprime/projectdavid-core-inference-worker](https://hub.docker.com/r/thanosprime/projectdavid-core-inference-worker)\n- [thanosprime/projectdavid-core-training-api](https://hub.docker.com/r/thanosprime/projectdavid-core-training-api)\n- [thanosprime/projectdavid-core-training-worker](https://hub.docker.com/r/thanosprime/projectdavid-core-training-worker)\n\nAll images are published automatically on every release of the source repository.\n\n---\n\n## Related Repositories\n\n| Repository | Purpose |\n|---|---|\n| [projectdavid-core](https://github.com/project-david-ai/projectdavid-core) | Source code for the platform runtime |\n| [projectdavid](https://github.com/project-david-ai/projectdavid) | Python SDK — **start here for application development** |\n| [reference-backend](https://github.com/project-david-ai/reference-backend) | Reference backend application |\n| [reference-frontend](https://github.com/project-david-ai/reference-frontend) | Reference frontend application |\n\n---\n\n## Privacy\n\nNo data or telemetry leaves the stack except when you explicitly route to an external inference provider, your assistant calls web search at runtime, one of your tools calls an external API, or you load an image from an external URL.\n\nYour instance is unique, 
with unique secrets. We cannot see your conversations, data, or secrets.\n\n---\n\n## License\n\nDistributed under the [PolyForm Noncommercial License 1.0.0](https://polyformproject.org/licenses/noncommercial/1.0.0/).\nCommercial licensing available — contact [licensing@projectdavid.co.uk](mailto:licensing@projectdavid.co.uk).\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fproject-david-ai%2Fprojectdavid-platform","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fproject-david-ai%2Fprojectdavid-platform","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fproject-david-ai%2Fprojectdavid-platform/lists"}