{"id":13492266,"url":"https://github.com/docker/genai-stack","last_synced_at":"2025-05-13T17:13:59.048Z","repository":{"id":198125673,"uuid":"691050458","full_name":"docker/genai-stack","owner":"docker","description":"Langchain + Docker + Neo4j + Ollama","archived":false,"fork":false,"pushed_at":"2025-03-11T16:50:06.000Z","size":1950,"stargazers_count":4668,"open_issues_count":84,"forks_count":1048,"subscribers_count":67,"default_branch":"main","last_synced_at":"2025-05-07T13:51:52.723Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"cc0-1.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/docker.png","metadata":{"files":{"readme":"readme.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-09-13T12:03:09.000Z","updated_at":"2025-05-06T23:27:43.000Z","dependencies_parsed_at":"2023-11-09T15:30:59.029Z","dependency_job_id":"3ef3a9cb-c9fb-4990-8755-c61c274c5a14","html_url":"https://github.com/docker/genai-stack","commit_stats":{"total_commits":141,"total_committers":28,"mean_commits":5.035714285714286,"dds":0.5460992907801419,"last_synced_commit":"8b01e69f0a52ad3fd5f0ff1cb90a4ad3cbac5ae7"},"previous_names":["docker/genai-stack"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/docker%2Fgenai-stack","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/docker%2Fgenai-stack/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/docker%2Fgenai-stack/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/h
osts/GitHub/repositories/docker%2Fgenai-stack/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/docker","download_url":"https://codeload.github.com/docker/genai-stack/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253990498,"owners_count":21995776,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-31T19:01:04.571Z","updated_at":"2025-05-13T17:13:54.037Z","avatar_url":"https://github.com/docker.png","language":"Python","readme":"# GenAI Stack\nThe GenAI Stack will get you started building your own GenAI application in no time.\nThe demo applications can serve as inspiration or as a starting point.\nLearn more about the details in the [introduction blog post](https://neo4j.com/blog/introducing-genai-stack-developers/).\n\n# Configure\n\nCreate a `.env` file from the environment template file `env.example`\n\nAvailable variables:\n| Variable Name          | Default value                      | Description                                                             |\n|------------------------|------------------------------------|-------------------------------------------------------------------------|\n| OLLAMA_BASE_URL        | http://host.docker.internal:11434  | REQUIRED - URL to Ollama LLM API                                        |   \n| NEO4J_URI              | neo4j://database:7687              | REQUIRED - URL to Neo4j database                                        |\n| NEO4J_USERNAME         | neo4j                              | REQUIRED - Username for 
Neo4j database                                  |\n| NEO4J_PASSWORD         | password                           | REQUIRED - Password for Neo4j database                                  |\n| LLM                    | llama2                             | REQUIRED - Can be any Ollama model tag, or gpt-4 or gpt-3.5 or claudev2 |\n| EMBEDDING_MODEL        | sentence_transformer               | REQUIRED - Can be sentence_transformer, openai, aws, ollama or google-genai-embedding-001|\n| AWS_ACCESS_KEY_ID      |                                    | REQUIRED - Only if LLM=claudev2 or embedding_model=aws                  |\n| AWS_SECRET_ACCESS_KEY  |                                    | REQUIRED - Only if LLM=claudev2 or embedding_model=aws                  |\n| AWS_DEFAULT_REGION     |                                    | REQUIRED - Only if LLM=claudev2 or embedding_model=aws                  |\n| OPENAI_API_KEY         |                                    | REQUIRED - Only if LLM=gpt-4 or LLM=gpt-3.5 or embedding_model=openai   |\n| GOOGLE_API_KEY         |                                    | REQUIRED - Only required when using GoogleGenai LLM or embedding model google-genai-embedding-001|\n| LANGCHAIN_ENDPOINT     | \"https://api.smith.langchain.com\"  | OPTIONAL - URL to Langchain Smith API                                   |\n| LANGCHAIN_TRACING_V2   | false                              | OPTIONAL - Enable Langchain tracing v2                                  |\n| LANGCHAIN_PROJECT      |                                    | OPTIONAL - Langchain project name                                       |\n| LANGCHAIN_API_KEY      |                                    | OPTIONAL - Langchain API key                                            |\n\n## LLM Configuration\nMacOS and Linux users can use any LLM that's available via Ollama. 
Check the \"tags\" section on the page of the model you want to use at https://ollama.ai/library and use that tag as the value of the environment variable `LLM=` in the `.env` file.\nAll platforms can use GPT-3.5-turbo and GPT-4 (bring your own API keys for OpenAI models).\n\n**MacOS**\nInstall [Ollama](https://ollama.ai) on MacOS and start it with `ollama serve` in a separate terminal before running `docker compose up`.\n\n**Linux**\nThere is no need to install Ollama manually; it will run in a container as part of the stack when running with the Linux profile: run `docker compose --profile linux up`.\nMake sure to set `OLLAMA_BASE_URL=http://llm:11434` in the `.env` file when using the Ollama Docker container.\n\nTo use the Linux-GPU profile, run `docker compose --profile linux-gpu up` and change `OLLAMA_BASE_URL=http://llm-gpu:11434` in the `.env` file.\n\n**Windows**\nOllama now supports Windows. Install [Ollama](https://ollama.ai) on Windows and start it with `ollama serve` in a separate terminal before running `docker compose up`. Alternatively, Windows users can generate an OpenAI API key and configure the stack to use `gpt-3.5` or `gpt-4` in the `.env` file.\n\n# Develop\n\n\u003e [!WARNING]\n\u003e There is a performance issue that impacts Python applications in the `4.24.x` releases of Docker Desktop. 
Please upgrade to the latest release before using this stack.\n\n**To start everything**\n```\ndocker compose up\n```\nIf changes to build scripts have been made, **rebuild**:\n```\ndocker compose up --build\n```\n\nTo enter **watch mode** (auto-rebuild on file changes), first start everything, then in a new terminal:\n```\ndocker compose watch\n```\n\n**Shutdown**\nIf the health check fails or containers don't start up as expected, shut down completely before starting up again.\n```\ndocker compose down\n```\n\n# Applications\n\nHere's what's in this repo:\n\n| Name | Main files | Compose name | URLs | Description |\n|---|---|---|---|---|\n| Support Bot | `bot.py` | `bot` | http://localhost:8501 | Main use case. Fullstack Python application. |\n| Stack Overflow Loader | `loader.py` | `loader` | http://localhost:8502 | Load Stack Overflow data into the database (create vector embeddings, etc.). Fullstack Python application. |\n| PDF Reader | `pdf_bot.py` | `pdf_bot` | http://localhost:8503 | Read a local PDF and ask it questions. Fullstack Python application. |\n| Standalone Bot API | `api.py` | `api` | http://localhost:8504 | Standalone Python HTTP API with streaming (SSE) and non-streaming endpoints. |\n| Standalone Bot UI | `front-end/` | `front-end` | http://localhost:8505 | Standalone client that uses the Standalone Bot API to interact with the model. JavaScript (Svelte) front-end. 
|\n\nThe database can be explored at http://localhost:7474.\n\n## App 1 - Support Agent Bot\n\nUI: http://localhost:8501\nDB client: http://localhost:7474\n\n- answer support question based on recent entries\n- provide summarized answers with sources\n- demonstrate difference between\n    - RAG Disabled (pure LLM response)\n    - RAG Enabled (vector + knowledge graph context)\n- allow to generate a high quality support ticket for the current conversation based on the style of highly rated questions in the database.\n\n![](.github/media/app1-rag-selector.png)\n*(Chat input + RAG mode selector)*\n\n|  |  |\n|---|---|\n| ![](.github/media/app1-generate.png) | ![](.github/media/app1-ticket.png) |\n| *(CTA to auto generate support ticket draft)* | *(UI of the auto generated support ticket draft)* |\n\n---\n\n##  App 2 - Loader\n\nUI: http://localhost:8502\nDB client: http://localhost:7474\n\n- import recent Stack Overflow data for certain tags into a KG\n- embed questions and answers and store them in vector index\n- UI: choose tags, run import, see progress, some stats of data in the database\n- Load high ranked questions (regardless of tags) to support the ticket generation feature of App 1.\n\n\n\n\n|  |  |\n|---|---|\n| ![](.github/media/app2-ui-1.png) | ![](.github/media/app2-model.png) |\n\n## App 3 Question / Answer with a local PDF\nUI: http://localhost:8503  \nDB client: http://localhost:7474\n\nThis application lets you load a local PDF into text\nchunks and embed it into Neo4j so you can ask questions about\nits contents and have the LLM answer them using vector similarity\nsearch.\n\n![](.github/media/app3-ui.png)\n\n## App 4 Standalone HTTP API\nEndpoints: \n  - http://localhost:8504/query?text=hello\u0026rag=false (non streaming)\n  - http://localhost:8504/query-stream?text=hello\u0026rag=false (SSE streaming)\n\nExample cURL command:\n```bash\ncurl 
http://localhost:8504/query-stream\\?text\\=minimal%20hello%20world%20in%20python\\\u0026rag\\=false\n```\n\nExposes the same question-answering functionality as App 1 above, using the same code and prompts.\n\n## App 5 - Static front-end\n\nUI: http://localhost:8505\n\nThis application has the same features as App 1, but is built separately from the back-end code using modern best practices (Vite, Svelte, Tailwind).  \nAuto-reload on changes is instant using the Docker watch `sync` config.  \n![](.github/media/app5-ui.png)\n","funding_links":[],"categories":["Python","HarmonyOS","others"],"sub_categories":["Windows Manager"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdocker%2Fgenai-stack","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdocker%2Fgenai-stack","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdocker%2Fgenai-stack/lists"}