{"id":18289203,"url":"https://github.com/cmsj/dreambot","last_synced_at":"2026-03-06T06:03:54.400Z","repository":{"id":145773112,"uuid":"531947971","full_name":"cmsj/dreambot","owner":"cmsj","description":"A framework for building chat bots that separate backend workers into separate processes via a message queue","archived":false,"fork":false,"pushed_at":"2025-12-05T00:24:05.000Z","size":351,"stargazers_count":3,"open_issues_count":3,"forks_count":1,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-12-08T07:41:59.756Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/cmsj.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2022-09-02T13:59:41.000Z","updated_at":"2025-11-06T11:43:14.000Z","dependencies_parsed_at":"2024-02-25T23:36:26.041Z","dependency_job_id":"30236c30-2c09-4f64-847a-91c7b619f67b","html_url":"https://github.com/cmsj/dreambot","commit_stats":null,"previous_names":[],"tags_count":91,"template":false,"template_full_name":null,"purl":"pkg:github/cmsj/dreambot","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cmsj%2Fdreambot","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cmsj%2Fdreambot/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cmsj%2Fdreambot/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cmsj
%2Fdreambot/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/cmsj","download_url":"https://codeload.github.com/cmsj/dreambot/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cmsj%2Fdreambot/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30164532,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-06T04:43:31.446Z","status":"ssl_error","status_checked_at":"2026-03-06T04:40:30.133Z","response_time":250,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-05T14:05:17.695Z","updated_at":"2026-03-06T06:03:54.350Z","avatar_url":"https://github.com/cmsj.png","language":"Python","readme":"![CI Build](https://github.com/cmsj/dreambot/actions/workflows/docker_build.yml/badge.svg)\n\n# Dreambot\n\nby Chris Jones \u003ccmsj@tenshu.net\u003e\n\n## What is it?\n\nDreambot is a distributed architecture for running chat bots.\n\n```mermaid\nflowchart LR\n    subgraph NATS [ NATS JetStream Cluster]\n        C[nats-1]\n        I[nats-2]\n        J[nats-3]\n\n        C \u003c--\u003e I\n        C \u003c--\u003e J\n        I \u003c--\u003e J\n    end\n    subgraph Dreambot\n        B([IRC Frontend])\n        E([GPT Backend])\n        D([Discord Frontend])\n        F([A1111 Backend])\n        M([A1111 
Backend])\n        O([ComfyUI Backend])\n        K([Command Backend])\n        NATS\n    end\n    A((Users)) \u003c-. IRC Servers .-\u003e B \u003c--\u003e NATS \u003c--\u003e E \u003c-.-\u003e G{{OpenAI API}}\n    A \u003c-. Discord Servers .-\u003e D \u003c--\u003e NATS \u003c--\u003e F \u003c-.-\u003e H{{A1111 API}} \u003c--\u003e L{{GPU}}\n    NATS \u003c--\u003e M \u003c-.-\u003e N{{A1111 API}} \u003c--\u003e L\n    NATS \u003c--\u003e O \u003c-.-\u003e P{{ComfyUI API}} \u003c--\u003e L\n    NATS \u003c--\u003e K\n\n```\n\n## What are Frontends and Backends?\n\n* Dreambot is designed to run as multiple processes, each of which is either a frontend or a backend.\n* Each of these processes is intended to subscribe to one or more message queues in [NATS](https://nats.io), from which it will receive messages.\n* Frontends listen for trigger keywords from users and then publish them to NATS, where they are picked up by the backends.\n* Backends then perform the requested work and publish the results back to NATS, where they are picked up by the frontends and sent back to the user.\n\n## What are the current Frontends and Backends?\n\nFrontends:\n\n* IRC\n* Discord\n\nBackends:\n\n* [OpenAI](https://www.openai.com)'s GPT [Chat Completions](https://platform.openai.com/docs/api-reference/chat/create)\n* [A1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui)'s fork of Stable Diffusion\n* [ComfyUI](https://github.com/comfyanonymous/ComfyUI) for Stable Diffusion image generation\n* Commands - simple bot commands, which currently include:\n  * chance:\n    * Usage: `!chance rain tomorrow`\n    * Action: Returns a random percentage chance of `rain tomorrow`\n\n## Why is this architecture so complicated for a simple chat bot?\n\nThe initial prototype was designed to run an IRC bot frontend on a server that is trusted to talk to the Internet, and a Stable Diffusion backend process on a machine that is not trusted to talk to the Internet, with a websocket connection 
between them.\n\nWhile this prototype worked well, it was not easy to extend to support Discord and additional types of backend. With the current design, each process knows nothing about the other processes, and can be run on any machine that can connect to the NATS cluster.\n\nAdditionally, it can be useful to spread out backends across machines that have relevant GPU hardware, and it's beneficial to have each backend in its own process because AI/ML code tends to have extremely specific requirements for Python and dependency library versions, which may not all be compatible in a single module.\n\n## How do I deploy this?\n\nI deploy all of this using Docker Compose, and here are the approximate steps I use. This all assumes:\n\n* You're running Ubuntu Server 22.04\n* You have an NVIDIA GPU that you want to use for A1111\n* You have ample storage in `/srv/docker/` to attach to the containers (in particular, A1111 will need \u003e10GB for its model cache)\n* You have a web server running that can serve files from `/srv/public/`\n\n### Stages\n\n---\n\n\u003cdetails\u003e\n    \u003csummary\u003eCreate the required directories\u003c/summary\u003e\n\n
```bash\nmkdir -p /srv/docker/nats/{config,nats-1,nats-2,nats-3}\nmkdir -p /srv/docker/a1111/data\nmkdir -p /srv/docker/dreambot/config\n```\n\n\u003c/details\u003e\n\n---\n\n\u003cdetails\u003e\u003csummary\u003eCreate the required files\u003c/summary\u003e\n\n---\n\n#### NATS config file\n\n\u003cblockquote\u003e\n\u003cdetails\u003e\u003csummary\u003e/srv/docker/nats/config/nats-server.conf\u003c/summary\u003e\n\n
```text\nhost: 0.0.0.0\nport: 4222\nhttp: 0.0.0.0:8222\nmax_payload: 8388608 # Default is 1MB, but we send images over NATS, so bump it to 8MB\n\njetstream {\n    store_dir: /data\n    # 1 GB\n    max_memory_store: 1073741824\n    # 10 GB\n    max_file_store: 10737418240\n}\n\ncluster {\n    name: dreambot\n    host: 0.0.0.0\n    port: 4245\n    routes: [\n        nats-route://nats-1:4245,\n        
nats-route://nats-2:4245,\n        nats-route://nats-3:4245\n    ]\n}\n```\n\n\u003c/details\u003e\u003c/blockquote\u003e\n\n---\n\n#### Install A1111\n\nSee their docs for this\n\n---\n\n#### Dreambot config files\n\n\u003cblockquote\u003e\n\u003cdetails\u003e\u003csummary\u003e/srv/docker/dreambot/config/config-frontend-irc.json\u003c/summary\u003e\n\nNotes:\n\n* `uri_base` should be where your webserver has this container's `/data` volume mounted\n\n```json\n{\n  \"triggers\": {\n          \"!gpt\": \"backend.gpt\",\n          \"!dream\": \"backend.a1111\",\n          \"!comfy\": \"backend.comfyui\"\n  },\n  \"nats_uri\": [ \"nats://nats-1:4222\", \"nats://nats-2:4222\", \"nats://nats-3:4222\" ],\n  \"output_dir\": \"/data\",\n  \"uri_base\": \"https://dreams.mydomain.com\",\n  \"irc\": [\n        {\n                \"nickname\": \"dreambot\",\n                \"ident\": \"dreambot\",\n                \"realname\": \"I've dreamed things you people wouldn't believe\",\n                \"host\": \"irc.server.com\",\n                \"port\": 6667,\n                \"ssl\": false,\n                \"channels\": [\n                        \"#dreambot\"\n                ]\n        },\n        {\n                \"nickname\": \"dreambot\",\n                \"ident\": \"dreambot\",\n                \"realname\": \"I've dreamed things you people wouldn't believe\",\n                \"host\": \"irc.something.org\",\n                \"port\": 6667,\n                \"ssl\": false,\n                \"channels\": [\n                        \"#dreambot\",\n                        \"#chat\"\n                ]\n        }\n\n  ]\n}\n```\n\n\u003c/details\u003e\u003c/blockquote\u003e\n\n\u003cblockquote\u003e\n\u003cdetails\u003e\u003csummary\u003e/srv/docker/dreambot/config/config-frontend-discord.json\u003c/summary\u003e\n\nThere is a bunch of Discord developer website stuff you need to do to get the token for this file, and you will need to give it permissions I haven't 
documented yet.\n\n
```json\n{\n  \"triggers\": {\n          \"!gpt\": \"backend.gpt\",\n          \"!dream\": \"backend.a1111\",\n          \"!comfy\": \"backend.comfyui\"\n  },\n  \"nats_uri\": [ \"nats://nats-1:4222\", \"nats://nats-2:4222\", \"nats://nats-3:4222\" ],\n  \"output_dir\": \"/data\",\n  \"discord\": {\n    \"token\": \"abc123\"\n  }\n}\n```\n\n\u003c/details\u003e\u003c/blockquote\u003e\n\n\u003cblockquote\u003e\n\u003cdetails\u003e\u003csummary\u003e/srv/docker/dreambot/config/config-backend-gpt.json\u003c/summary\u003e\n\nSign up for a developer account at [https://openai.com](https://openai.com) and you can get your API key and organization ID from there.\n\n
```json\n{\n  \"gpt\": {\n      \"api_key\": \"ab-abc123\",\n      \"organization\": \"org-ABC123\",\n      \"model\": \"gpt-3.5-turbo\"\n  },\n  \"nats_uri\": [ \"nats://nats-1:4222\", \"nats://nats-2:4222\", \"nats://nats-3:4222\" ]\n}\n```\n\n\u003c/details\u003e\u003c/blockquote\u003e\n\n\u003cblockquote\u003e\n\u003cdetails\u003e\u003csummary\u003e/srv/docker/dreambot/config/config-backend-a1111.json\u003c/summary\u003e\n\nThe A1111 backend supports arguments for choosing between several models. Install the models you want in A1111 and (this is optional, but recommended) configure it to keep several models in VRAM at the same time. 
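\n\nBefore committing settings to the config, it can help to try a payload by hand. This sketch (an illustration of mine, not part of Dreambot) merges a model's settings block with a prompt and POSTs it to A1111's `/sdapi/v1/txt2img` API; the `a1111` hostname and port `9090` match the Docker Compose service further down this document:\n\n
```python\nimport json\nimport urllib.request\n\n# Per-model settings, mirroring the \"sdxl\" payload in the config below.\nSDXL_TURBO = {\n    \"sampler_name\": \"Euler a\",\n    \"seed\": -1,\n    \"cfg_scale\": 1.0,\n    \"steps\": 1,\n    \"override_settings\": {\"sd_model_checkpoint\": \"sd_xl_turbo_1.0_fp16\"},\n}\n\n
def build_payload(prompt, model_settings):\n    \"\"\"Merge a model's settings block with the user's prompt.\"\"\"\n    payload = dict(model_settings)\n    payload[\"prompt\"] = prompt\n    return payload\n\n
def txt2img(host, port, prompt, settings):\n    \"\"\"POST to A1111's txt2img endpoint; returns a list of base64-encoded images.\"\"\"\n    req = urllib.request.Request(\n        \"http://%s:%d/sdapi/v1/txt2img\" % (host, port),\n        data=json.dumps(build_payload(prompt, settings)).encode(),\n        headers={\"Content-Type\": \"application/json\"},\n    )\n    with urllib.request.urlopen(req) as resp:\n        return json.load(resp)[\"images\"]\n\n
if __name__ == \"__main__\":\n    images = txt2img(\"a1111\", 9090, \"a watercolour fox\", SDXL_TURBO)\n    print(\"got %d image(s)\" % len(images))\n```\n\n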
Play with the settings to get what you want and then configure the settings that will be sent with each request, in the json config:\n\n
```json\n{\n  \"a1111\": {\n      \"host\": \"a1111\",\n      \"port\": \"9090\",\n      \"default_model\": \"sdxl\",\n      \"models\": {\n        \"sdxl\": {\n                \"payload\": {\n                  \"hr_upscaler\": \"SwinIR_4x\",\n                  \"sampler_name\": \"Euler a\",\n                  \"seed\": -1,\n                  \"restore_faces\": true,\n                  \"cfg_scale\": 1.0,\n                  \"override_settings\": {\n                      \"sd_model_checkpoint\": \"sd_xl_turbo_1.0_fp16\"\n                  },\n                  \"steps\": 1\n                }\n        },\n        \"real\": {\n           \"payload\": {\n                  \"sampler_name\": \"DPM++ SDE\",\n                  \"scheduler\": \"Karras\",\n                  \"seed\": -1,\n                  \"restore_faces\": true,\n                  \"steps\": 5,\n                  \"cfg_scale\": 1.6,\n                  \"override_settings\": {\n                      \"sd_model_checkpoint\": \"realisticVisionV60B1_v51HyperVAE\"\n                  }\n           }\n        }\n      }\n  },\n  \"nats_uri\": [ \"nats://nats-1:4222\", \"nats://nats-2:4222\", \"nats://nats-3:4222\" ]\n}\n```\n\n\u003c/details\u003e\u003c/blockquote\u003e\n\n\u003cblockquote\u003e\n\u003cdetails\u003e\u003csummary\u003e/srv/docker/dreambot/config/config-backend-comfyui.json\u003c/summary\u003e\n\nThe ComfyUI backend uses workflow definitions for image generation. You can export workflows from the ComfyUI web interface and use them in the configuration. 
The backend will automatically update the text prompt in the workflow.\n\n```json\n{\n  \"comfyui\": {\n      \"host\": \"comfyui\",\n      \"port\": \"8188\",\n      \"default_workflow\": \"txt2img\",\n      \"workflows\": {\n        \"txt2img\": {\n                \"workflow\": {\n                  \"3\": {\n                    \"class_type\": \"KSampler\",\n                    \"inputs\": {\n                      \"seed\": -1,\n                      \"steps\": 20,\n                      \"cfg\": 8.0,\n                      \"sampler_name\": \"euler\",\n                      \"scheduler\": \"normal\",\n                      \"denoise\": 1.0,\n                      \"model\": [\"4\", 0],\n                      \"positive\": [\"6\", 0],\n                      \"negative\": [\"7\", 0],\n                      \"latent_image\": [\"5\", 0]\n                    }\n                  },\n                  \"4\": {\n                    \"class_type\": \"CheckpointLoaderSimple\",\n                    \"inputs\": {\n                      \"ckpt_name\": \"sd_xl_base_1.0.safetensors\"\n                    }\n                  },\n                  \"5\": {\n                    \"class_type\": \"EmptyLatentImage\",\n                    \"inputs\": {\n                      \"width\": 512,\n                      \"height\": 512,\n                      \"batch_size\": 1\n                    }\n                  },\n                  \"6\": {\n                    \"class_type\": \"CLIPTextEncode\",\n                    \"inputs\": {\n                      \"text\": \"beautiful scenery\",\n                      \"clip\": [\"4\", 1]\n                    }\n                  },\n                  \"7\": {\n                    \"class_type\": \"CLIPTextEncode\",\n                    \"inputs\": {\n                      \"text\": \"text, watermark\",\n                      \"clip\": [\"4\", 1]\n                    }\n                  },\n                  \"8\": {\n                    
\"class_type\": \"VAEDecode\",\n                    \"inputs\": {\n                      \"samples\": [\"3\", 0],\n                      \"vae\": [\"4\", 2]\n                    }\n                  },\n                  \"9\": {\n                    \"class_type\": \"SaveImage\",\n                    \"inputs\": {\n                      \"filename_prefix\": \"ComfyUI\",\n                      \"images\": [\"8\", 0]\n                    }\n                  }\n                }\n        }\n      }\n  },\n  \"nats_uri\": [ \"nats://nats-1:4222\", \"nats://nats-2:4222\", \"nats://nats-3:4222\" ]\n}\n```\n\n\u003c/details\u003e\u003c/blockquote\u003e\n\n\u003cblockquote\u003e\n\u003cdetails\u003e\u003csummary\u003e/srv/docker/dreambot/config/config-backend-commands.json\u003c/summary\u003e\n\n
```json\n{\n  \"nats_uri\": [ \"nats://nats-1:4222\", \"nats://nats-2:4222\", \"nats://nats-3:4222\" ],\n  \"nats_queue_name\": \"!commands\"\n}\n```\n\u003c/details\u003e\n\u003c/blockquote\u003e\n\n\u003c/details\u003e\n\n---\n\u003cdetails\u003e\u003csummary\u003eAnsible playbook to install/configure CUDA (required for A1111 backend)\u003c/summary\u003e\n\nTo run a container that accesses GPUs for CUDA, you need to install nvidia drivers, the nvidia-container-toolkit package, and then configure the docker daemon to use it as a runtime.\n\n
```yaml\n- name: Install cuda keyring\n  ansible.builtin.apt:\n    deb: https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb\n    state: present\n\n
- name: Install nvidia/cuda packages\n  ansible.builtin.apt:\n    name: \"{{ item }}\"\n    update_cache: yes\n  with_items:\n    - nvidia-headless-530\n    - nvidia-utils-530\n    - cuda-toolkit\n\n
# Based on: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker\n- name: Install nvidia-container-toolkit apt key\n  ansible.builtin.shell:\n    cmd: curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg\n    creates: /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg\n\n
- name: Add the nvidia-container-toolkit apt repo\n  ansible.builtin.shell:\n    cmd: curl -s -L https://nvidia.github.io/libnvidia-container/ubuntu22.04/libnvidia-container.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | tee /etc/apt/sources.list.d/nvidia-container-toolkit.list\n    creates: /etc/apt/sources.list.d/nvidia-container-toolkit.list\n\n
- name: Install the nvidia-container-toolkit package\n  ansible.builtin.apt:\n    name: nvidia-container-toolkit\n    update_cache: yes\n\n
- name: Configure nvidia-container-toolkit runtime\n  ansible.builtin.shell:\n    cmd: nvidia-ctk runtime configure --runtime=docker\n```\n\nThen restart the docker daemon, and you should be able to run containers that access the GPUs.\n\u003c/details\u003e\n\n---\n\n\u003cdetails\u003e\n    \u003csummary\u003eDocker Compose for deploying the Dreambot infrastructure\u003c/summary\u003e\n\n
```yaml\nnetworks:\n    dreambot:\n        external: false\n        driver: bridge\n\nservices:\n    # Deploy a NATS cluster\n    nats-1:\n        hostname: nats-1\n        image: nats\n        restart: unless-stopped\n        networks:\n            - dreambot\n        expose:\n            - \"8222\"\n        volumes:\n            - /srv/docker/nats/nats-1:/data\n            - /srv/docker/nats/config:/config\n        entrypoint: /nats-server\n        command: --name nats-1 -c /config/nats-server.conf\n\n
    nats-2:\n        hostname: nats-2\n        image: nats\n        restart: unless-stopped\n        networks:\n            - dreambot\n        expose:\n            - \"8222\"\n        volumes:\n            - /srv/docker/nats/nats-2:/data\n            - /srv/docker/nats/config:/config\n        entrypoint: /nats-server\n        command: --name nats-2 -c /config/nats-server.conf\n\n
    nats-3:\n        hostname: nats-3\n        image: nats\n        restart: unless-stopped\n        networks:\n            - dreambot\n        expose:\n            - \"8222\"\n        volumes:\n            - /srv/docker/nats/nats-3:/data\n            - /srv/docker/nats/config:/config\n        entrypoint: /nats-server\n        command: --name nats-3 -c /config/nats-server.conf\n\n
    # Deploy A1111 with access to our GPU\n    a1111:\n        hostname: a1111\n        image: ghcr.io/neggles/sd-webui-docker:latest\n        restart: unless-stopped\n        deploy:\n            resources:\n                reservations:\n                    devices:\n                        - driver: nvidia\n                          count: 1\n                          capabilities: [gpu]\n        networks:\n            - dreambot\n        environment:\n            CLI_ARGS: \"--skip-version-check --allow-code --enable-insecure-extension-access --api --xformers --opt-channelslast\"\n            SD_WEBUI_VARIANT: \"default\"\n            # make TQDM behave a little better\n            PYTHONUNBUFFERED: \"1\"\n            TERM: \"vt100\"\n        expose:\n            - \"9090\"\n        volumes:\n            - /srv/docker/a1111/data:/data\n            - /srv/public/outputs:/outputs # This is where the outputs of A1111 will be stored if you talk to it directly rather than through Dreambot (e.g. their web interface)\n\n
    # Deploy the Dreambot Frontends\n    dreambot-frontend-irc:\n        hostname: dreambot-frontend-irc\n        image: ghcr.io/cmsj/dreambot:latest\n        restart: unless-stopped\n        networks:\n            - dreambot\n        volumes:\n            - /srv/docker/dreambot/config:/config\n            - /srv/public/dreams:/data\n        command: dreambot_frontend_irc -c /config/config-frontend-irc.json\n\n
    dreambot-frontend-discord:\n        hostname: dreambot-frontend-discord\n        image: ghcr.io/cmsj/dreambot:latest\n        restart: unless-stopped\n        networks:\n            - dreambot\n        volumes:\n            - /srv/docker/dreambot/config:/config\n            - /srv/public/dreams:/data\n        command: dreambot_frontend_discord -c /config/config-frontend-discord.json\n\n
    # Deploy the Dreambot Backends\n    dreambot-backend-gpt:\n        hostname: dreambot-backend-gpt\n        image: ghcr.io/cmsj/dreambot:latest\n        restart: unless-stopped\n        networks:\n            - dreambot\n        volumes:\n            - /srv/docker/dreambot/config:/config\n            - /srv/public/dreams:/data\n        command: dreambot_backend_gpt -c /config/config-backend-gpt.json\n\n
    dreambot-backend-a1111:\n        hostname: dreambot-backend-a1111\n        image: ghcr.io/cmsj/dreambot:latest\n        restart: unless-stopped\n        networks:\n            - dreambot\n        volumes:\n            - /srv/docker/dreambot/config:/config\n            - /srv/public/dreams:/data\n        command: dreambot_backend_a1111 -c /config/config-backend-a1111.json\n\n```\n\n\u003c/details\u003e\n\n---\n\n## That's a lot of setup, holy cow\n\nYes, it is.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcmsj%2Fdreambot","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcmsj%2Fdreambot","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcmsj%2Fdreambot/lists"}