{"id":48683359,"url":"https://github.com/tigergraph/graphrag","last_synced_at":"2026-04-11T03:35:21.868Z","repository":{"id":309454443,"uuid":"996277185","full_name":"tigergraph/graphrag","owner":"tigergraph","description":null,"archived":false,"fork":false,"pushed_at":"2026-04-11T01:23:06.000Z","size":24709,"stargazers_count":11,"open_issues_count":1,"forks_count":6,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-04-11T03:22:18.436Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tigergraph.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"docs/Contributing.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-06-04T18:02:52.000Z","updated_at":"2026-04-11T01:22:29.000Z","dependencies_parsed_at":"2025-10-16T11:39:37.596Z","dependency_job_id":"5c344e4d-8ef7-4182-9677-8dc4d73dd1c0","html_url":"https://github.com/tigergraph/graphrag","commit_stats":null,"previous_names":["tigergraph/graphrag"],"tags_count":3,"template":false,"template_full_name":null,"purl":"pkg:github/tigergraph/graphrag","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tigergraph%2Fgraphrag","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tigergraph%2Fgraphrag/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tigergraph%2Fgraphrag/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/
tigergraph%2Fgraphrag/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tigergraph","download_url":"https://codeload.github.com/tigergraph/graphrag/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tigergraph%2Fgraphrag/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31668049,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-10T17:19:37.612Z","status":"online","status_checked_at":"2026-04-11T02:00:05.776Z","response_time":54,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2026-04-11T03:35:21.090Z","updated_at":"2026-04-11T03:35:21.854Z","avatar_url":"https://github.com/tigergraph.png","language":"Python","readme":"# TigerGraph GraphRAG\n\n\u003e ⚠️ **Disclaimer**  \n\u003e - **Supported Backend:** TigerGraph is the only Vector and Graph DB supported in this project. Hybrid Search is the officially supported retrieval method at the backend.  \n\u003e - **Limitations:** No official support is provided unless delivered through a Statement of Work (SOW) with the Solutions team. Customizations are customer-owned and self-service, covering custom LLM services, prompt logic, UI integration, and pipeline orchestration. 
This project is provided \"as is\" without any warranties or guarantees.\n\n## Table of Contents\n\n- [Releases](#releases)\n- [Overview](#overview)\n  - [Natural Language Query](#natural-language-query)\n  - [Knowledge Graph Query](#knowledge-graph-query)\n- [Getting Started](#getting-started)\n  - [Prerequisites](#prerequisites)\n  - [Quick Start](#quick-start)\n    - [Use TigerGraph Docker-Based Instance](#use-tigergraph-docker-based-instance)\n    - [Use Pre-Installed TigerGraph Instance](#use-pre-installed-tigergraph-instance)\n  - [Deploy GraphRAG Manually](#deploy-graphrag-manually)\n    - [Manual Deploy of GraphRAG with Docker Compose](#manual-deploy-of-graphrag-with-docker-compose)\n    - [Use Standalone TigerGraph instance (If preferred)](#use-standalone-tigergraph-instance-if-preferred)\n    - [Manual Deploy of GraphRAG with Kubernetes](#manual-deploy-of-graphrag-with-kubernetes)\n- [Use TigerGraph GraphRAG](#use-tigergraph-graphrag)\n  - [Run Demo with Preloaded GraphRAG](#run-demo-with-preloaded-graphrag)\n  - [Manually Build GraphRAG From Scratch](#manually-build-graphrag-from-scratch)\n- [Document Ingestion for Knowledge Graph](#document-ingestion-for-knowledge-graph)\n  - [Ingest Documents from the UI](#ingest-documents-from-the-ui)\n    - [Local File Upload](#local-file-upload)\n    - [Download from Cloud](#download-from-cloud)\n    - [Use Amazon BDA](#use-amazon-bda)\n  - [Ingest Documents via API](#ingest-documents-via-api)\n- [More Detailed Configurations](#more-detailed-configurations)\n  - [DB configuration](#db-configuration)\n  - [GraphRAG configuration](#graphrag-configuration)\n  - [Chat History Configuration](#chat-history-configuration)\n  - [LLM provider configuration](#llm-provider-configuration)\n    - [Supported parameters](#supported-parameters)\n    - [Provider examples](#provider-examples)\n    - [OpenAI](#openai)\n    - [Google GenAI](#google-genai)\n    - [GCP VertexAI](#gcp-vertexai)\n    - [Azure](#azure)\n    - [AWS 
Bedrock](#aws-bedrock)\n    - [Ollama](#ollama)\n    - [Hugging Face](#hugging-face)\n    - [Groq](#groq)\n- [Customization and Extensibility](#customization-and-extensibility)\n  - [Test Your Code Changes](#test-your-code-changes)\n    - [Testing with Pytest](#testing-with-pytest)\n    - [Test Code Change in Docker Container](#test-code-change-in-docker-container)\n  - [Test Script Options](#test-script-options)\n    - [Configure LLM Service](#configure-llm-service)\n    - [Configure Testing Graphs](#configure-testing-graphs)\n    - [Configure Weights and Biases](#configure-weights-and-biases)\n\n---\n\n## Releases\n* **2/28/2026**: GraphRAG v1.2.0 released. Added Admin UI for graph initialization, document ingestion, and knowledge graph rebuild, along with many other improvements and bug fixes. See [release notes](https://github.com/tigergraph/graphrag/releases/tag/v1.2.0) for details.\n* **9/22/2025**: GraphRAG v1.1 (v1.1.0) is now officially available. AWS Bedrock support is complete, with BDA integration for multimodal document ingestion. See [release notes](https://github.com/tigergraph/graphrag/releases/tag/v1.1.0) for details.\n* **6/18/2025**: GraphRAG v1.0 (v1.0.0) is now officially available. The TigerGraph database is the only graph and vector storage supported.\nPlease see [Release Notes](https://docs.tigergraph.com/tg-graphrag/current/release-notes/) for details.\n\n---\n\n## Overview\n\n![GraphRAG Overview](./docs/img/TG-GraphRAG-Overview.png)\n\nTigerGraph GraphRAG is an AI assistant that is meticulously designed to combine the power of vector stores, graph databases, and generative AI to draw the most value from data and to enhance productivity across various business functions, including analytics, development, and administration tasks. 
It is one AI assistant with two core component services:\n* A natural language assistant for Q\u0026A with graph-powered solutions\n* A knowledge graph builder for managing documents and graphs\n\nYou can interact with GraphRAG through the built-in chat interface and APIs. For now, your own LLM services (from OpenAI, Azure, GCP, AWS Bedrock, Ollama, Hugging Face, and Groq) are required to use GraphRAG, but in future releases you will be able to use TigerGraph’s LLMs.\n\n### Natural Language Query\n![Natural Language Query](./docs/img/NatureLanguageQuery-Architecture.png)\n\nWhen a question is posed in natural language, GraphRAG employs a novel three-phase interaction with both the TigerGraph database and an LLM of the user's choice to obtain accurate and relevant responses.\n\nThe first phase aligns the question with the particular data available in the database. GraphRAG uses the LLM to compare the question with the graph’s schema and replace entities in the question with graph elements. For example, if there is a vertex type of `BareMetalNode` and the user asks `How many servers are there?`, the question will be translated to `How many BareMetalNode vertices are there?`. In the second phase, GraphRAG uses the LLM to compare the transformed question with a set of curated database queries and functions in order to select the best match. In the third phase, GraphRAG executes the identified query and returns the result in natural language along with the reasoning behind the actions.\n\nUsing pre-approved queries provides multiple benefits. First and foremost, it reduces the likelihood of hallucinations, because the meaning and behavior of each query have been validated.  
Second, the system can potentially predict the execution resources needed to answer the question.\n\n### Knowledge Graph Query\n![Knowledge Graph Query](./docs/img/GraphRAG-Architecture.png)\n\nFor inquiries that cannot be answered with structured graph data, GraphRAG employs an AI chatbot with a graph-augmented knowledge graph based on a user's own documents or text data. It builds a knowledge graph from source material and applies its unique variant of knowledge graph-based RAG (Retrieval Augmented Generation) to improve the contextual relevance and accuracy of answers to natural-language questions.\n\nGraphRAG will also identify concepts and build an ontology to add semantics and reasoning to the knowledge graph, or users can provide their own concept ontology. Then, with this comprehensive knowledge graph, GraphRAG performs hybrid retrievals, combining traditional vector search and graph traversals, to collect more relevant information and richer context to answer users’ knowledge questions.\n\nOrganizing the data as a knowledge graph allows a chatbot to access accurate, fact-based information quickly and efficiently, thereby reducing the reliance on generating responses from patterns learned during training, which can sometimes be incorrect or out of date.\n\n[Go back to top](#top)\n\n---\n\n## Getting Started\n\n### Prerequisites\n* Docker + Docker Compose Plugin, or Kubernetes\n* TigerGraph DB 4.2+.\n* An API key for your LLM provider. (An LLM provider refers to a company or organization that offers Large Language Models (LLMs) as a service. The API key verifies the identity of the requester, ensuring that the request is coming from a registered and authorized user or application.) 
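\n\n  For example, a minimal shell sketch of providing this key to the quick-start scripts in the next section, which read it from the `LLM_API_KEY` environment variable (the value below is the same placeholder used later in the configuration steps, not a real key):\n\n  ```\n  # Replace the placeholder with your provider's real API key\n  export LLM_API_KEY=\"\u003cYOUR_LLM_API_KEY\u003e\"\n  ```\n\n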
Currently, GraphRAG supports the following LLM providers: OpenAI, Azure OpenAI, GCP, AWS Bedrock.\n\n\n### Quick Start\n\n#### Use TigerGraph Docker-Based Instance\nSet your LLM provider's API key (supported providers: `openai` or `gemini`) as the environment variable `LLM_API_KEY`, then use the following command for a one-step quick deployment with TigerGraph Community Edition and default configurations:\n```\ncurl -k https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/tutorials/setup_graphrag.sh | bash\n```\n\nThe GraphRAG instances will be deployed in the `./graphrag` folder, and the TigerGraph instance will be available at `http://localhost:14240`.\nTo change the installation folder, use `bash -s -- \u003cgraphrag_folder\u003e \u003cllm_provider\u003e` instead of `bash` at the end of the above command.\n\n\u003e Note: for other LLM providers, manually update `configs/server_config.json` accordingly and re-run `docker compose up -d`\n\n#### Use Pre-Installed TigerGraph Instance\nSimilar to the above setup, use the following command for a one-step quick deployment connecting to a pre-installed TigerGraph with default configurations:\n```\ncurl -k https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/tutorials/setup_graphrag_tg.sh | bash\n```\n\nThe GraphRAG instances will be deployed in the `./graphrag` folder and will connect to the TigerGraph instance at `http://localhost:14240` by default.\nTo change the installation folder, TigerGraph instance location, or username/password, use `bash -s -- \u003cgraphrag_folder\u003e \u003cllm_provider\u003e \u003ctg_host\u003e \u003ctg_port\u003e \u003ctg_username\u003e \u003ctg_password\u003e` instead of `bash` at the end of the above command.\n\n[Go back to top](#top)\n\n\n### Deploy GraphRAG Manually\nThe GraphRAG services can be deployed manually using Docker Compose or Kubernetes with updated configurations for different use cases.\n\n#### Manual Deploy of GraphRAG with Docker Compose\n\n##### Step 1: Get docker-compose 
file\nDownload the [docker-compose.yml](https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/tutorials/docker-compose.yml) file directly.\n\nThe Docker Compose file contains all dependencies for GraphRAG including a TigerGraph database. If you want to use a separate TigerGraph instance, you can comment out the `tigergraph` section from the docker compose file and restart all services. However, please follow the instructions below to make sure your standalone TigerGraph server is accessible from other GraphRAG containers.\n\n##### Step 2: Set up configurations\n\nNext, download the following configuration files and put them in a `configs` subdirectory of the directory containing the Docker Compose file:\n* [configs/server_config.json](https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/tutorials/configs/server_config.json)\n* [configs/nginx.conf](https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/tutorials/configs/nginx.conf)\n\nHere’s what the folder structure looks like:\n```\n    graphrag\n    ├── configs\n    │   ├── nginx.conf\n    │   └── server_config.json\n    └── docker-compose.yml\n```\n\n##### Step 3: Adjust configurations\n\nEdit the `llm_config` section of `configs/server_config.json` and replace `\u003cYOUR_LLM_API_KEY\u003e` with your own API key for the LLM provider. \n \n\u003e If desired, you can also change the models used by the embedding service and completion service to your preferred models to adjust the output from the LLM service.\n\n##### Step 4: Configure Logging Level in Dockerfile (Optional)\n\nTo configure the logging level of the service, edit the Docker Compose file.\n\n**By default, the logging level is set to \"INFO\".**\n\n```console\nENV LOGLEVEL=\"INFO\"\n```\n\nThis line can be changed to support different logging levels.\n\n**The levels are described below:**\n\n| Level | Description |\n| --- | --- |\n| `CRITICAL` | A serious error. 
|\n| `ERROR` | A failure to perform some function. |\n| `WARNING` | Indication of unexpected problems, e.g. failure to map a user’s question to the graph schema. |\n| `INFO` | Confirming that the service is performing as expected. |\n| `DEBUG` | Detailed information, e.g. the functions retrieved during the `GenerateFunction` step, etc. |\n| `DEBUG_PII` | Finer-grained information that could potentially include `PII`, such as a user’s question, the complete function call (with parameters), and the LLM’s natural language response. |\n| `NOTSET` | All messages are processed. |\n\n##### Step 5: Start all services\n\nNow, simply run `docker compose up -d` and wait for all the services to start.\n\n\u003e Note: the `graphrag` container will go down if the TigerGraph service is not ready. Logging into the `tigergraph` container, bringing up the TigerGraph services, and rerunning `docker compose up -d` should resolve the issue.\n\n##### Step 6: Stop all services (when needed)\n\nRun `docker compose down` and wait for all the service containers to stop and be removed.\n\n[Go back to top](#top)\n\n#### Use Standalone TigerGraph instance (If preferred)\n\n\u003e **_Note:_** The vector feature is available in both TigerGraph Community Edition 4.2.0+ and Enterprise Edition 4.2.0+.\n\nIf you prefer to start a TigerGraph Community Edition instance without a license key, please make sure the container can be accessed from the GraphRAG containers by adding `--network graphrag_default`:\n```\ndocker run -d -p 14240:14240 --name tigergraph --ulimit nofile=1000000:1000000 --init --network graphrag_default -t tigergraph/community:4.2.2\n```\n\n\u003e Use **tigergraph/tigergraph:4.2.2** if Enterprise Edition is preferred.\n\u003e Setting up **DNS** or `/etc/hosts` properly is an alternative solution to ensure containers can connect to each other.\n\u003e Or modify `hostname` in the `db_config` section of `configs/server_config.json` and replace `http://tigergraph` with your TigerGraph container's IP address, e.g., 
`http://172.19.0.2`. \n\nCheck the service status with the following commands:\n```\ndocker exec -it tigergraph /bin/bash\ngadmin status\ngadmin start all\n```\n\nTo shut down the database after use, run the following shell command:\n```\ngadmin stop all\n```\n\n[Go back to top](#top)\n\n\n#### Manual Deploy of GraphRAG with Kubernetes\n\n##### Step 1: Get Kubernetes deployment file\n  Download the [graphrag-k8s.yml](https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/tutorials/graphrag-k8s.yml) file directly\n\n##### Step 2: Modify `graphrag-k8s.yml` (Optional)\n  Remove the sections for the TigerGraph instance if you're using a standalone TigerGraph instance instead\n\n##### Step 3: Set up server configurations\n  Next, in the same directory as the Kubernetes deployment file, create a `configs` directory and download the following configuration files:\n  * [configs/server_config.json](https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/tutorials/configs/server_config.json)\n\n  Update the TigerGraph database information, LLM API keys, and other configs accordingly.\n\n##### Step 4: Install Nginx Ingress (Optional)\n  If Nginx Ingress is not installed yet, it can be installed using `kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.1/deploy/static/provider/cloud/deploy.yaml`\n\n##### Step 5: Start all services\n  Replace `/path/to/graphrag/configs` with the absolute path of the `configs` folder inside `graphrag-k8s.yml`, and update the TigerGraph database information and other configs accordingly.\n\n  Now, simply run `kubectl apply -f graphrag-k8s.yml` and wait for all the services to start.\n\n##### Step 6: Stop all services (Optional)\n  Run `kubectl delete -f graphrag-k8s.yml` and wait for all the services in the deployment to be deleted.\n\n\u003e Note: Nginx Ingress should be deleted using kubectl delete -f 
https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.1/deploy/static/provider/cloud/deploy.yaml if port 80 needs to be released\n\n[Go back to top](#top)\n\n---\n\n## Use TigerGraph GraphRAG\n\nGraphRAG is friendly to both technical and non-technical users. There is a graphical chat interface as well as API access to GraphRAG. Function-wise, GraphRAG can answer your questions by calling existing queries in the database, build a knowledge graph from your documents, and answer knowledge questions based on your documents.\n\n### Run Demo with Preloaded GraphRAG\n\nThe pre-loaded knowledge graph `TigerGraphRAG` is provided for express access to the GraphRAG features.\n\n#### Step 1: Get data package\n\nDownload the following data file and put it under `/home/tigergraph/graphrag/` inside your TigerGraph container:\n* [ExportedGraph.zip](https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/data/ExportedGraph.zip)\n\nUse the following commands if the file cannot be downloaded inside the TigerGraph container directly:\n```\ndocker exec -it tigergraph mkdir -p /home/tigergraph/graphrag\ndocker exec -it tigergraph curl -kL https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/data/ExportedGraph.zip -o /home/tigergraph/graphrag/ExportedGraph.zip\n```\n\n\u003e Note: the commands should be adapted accordingly if a standalone TigerGraph instance is used\n\n#### Step 2: Import data package\nNext, log onto the TigerGraph instance and make use of the Database Import feature to recreate the GraphRAG graph:\n```\ndocker exec -it tigergraph /bin/bash\ngsql \"import graph all from \\\"/home/tigergraph/graphrag\\\"\"\ngsql \"install query all\"\n```\n\nWait until the following output is given:\n```\n[======================================================================================================] 100% (26/26)\nQuery installation finished.\n```\n\n#### Step 3: Access Chatbot UI\nOpen your browser to access 
`http://localhost:\u003cnginx_port\u003e` to open GraphRAG Chat. For example: http://localhost:80\n\nEnter the username and password of the TigerGraph database to log in.\n\n![Chat Login](./docs/img/ChatLogin.jpg)\n\nAt the top of the page, select `Community Search` as the RAG pattern and `TigerGraphRAG` as the graph.\n![RAG Config](./docs/img/RAGConfig.jpg)\n\nIn the chat box, input the question `how to load data to tigergraph vector store, give an example in Python` and click the `send` button.\n![Demo Question](./docs/img/DemoQuestion.jpg)\n\nYou can also ask other questions on statistics and data inside the TigerGraph database.\n![Data Inquiry](./docs/img/Inquiry.jpg)\n\n[Go back to top](#top)\n\n\n### Manually Build GraphRAG From Scratch\n\nIf you want to experience the whole GraphRAG process, you can build the GraphRAG from scratch. However, please review the LLM model and service settings carefully, because re-generating the embeddings and data structures for the raw data will incur LLM costs.\n\n#### Step 1: Get demo script\n\nThe following scripts are needed to run the demo. 
Please download and put them in the same `./graphrag` directory as the Docker Compose file:\n* Demo driver: [graphrag_demo.sh](https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/tutorials/graphrag_demo.sh)\n* GraphRAG initializer: [init_graphrag.py](https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/tutorials/init_graphrag.py)\n* Example: [answer_question.py](https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/tutorials/answer_question.py)\n\n#### Step 2: Download the demo data\n\nNext, download the following data file and put it in a `data` subdirectory of the directory containing the Docker Compose file:\n* [data/tg_tutorials.jsonl](https://raw.githubusercontent.com/tigergraph/graphrag/refs/heads/main/docs/data/tg_tutorials.jsonl)\n\n#### Step 3: Run the demo driver script\n\n\u003e Note: Python 3.11+ is needed to run the demo\n\nIt is recommended to use a virtual env to isolate the runtime environment for the demo:\n```\npython3.11 -m venv demo\nsource demo/bin/activate\n```\n\nNow, simply run the demo script to try GraphRAG.\n```\n./graphrag_demo.sh\n```\n\nThe script will:\n1. Check the environment\n1. Initialize the TigerGraph schema and related queries\n1. Load the sample data\n1. Initialize the GraphRAG based on the graph and install the required queries\n1. 
Ask a question via Python to get an answer from GraphRAG\n\n[Go back to top](#top)\n\n---\n\n## Document Ingestion for Knowledge Graph\n\nDocuments can be ingested into the knowledge graph either through the UI Admin page or manually via backend APIs.\n\n\u003e **Important Note**: The knowledge graph needs to be initialized before document ingestion and should be refreshed after document ingestion to update the graph content\n\n![Document Processing Workflow](./docs/img/IngestionWorkflow.png)\n\n### Ingest Documents from the UI\n\nYou can upload local files, download files from cloud storage, or use **Amazon Bedrock Data Automation (BDA)** as an external pre-processor for document ingestion.\n\n#### Local File Upload\n\nLocal file ingestion follows a two-step process:\n\n1. **Upload local files to the server**  \n   Files are first uploaded to the GraphRAG server for pre-processing.  \n   - Multimodal files (e.g., PDFs) are converted into text along with extracted images.  \n   - Each image receives a generated description and a reference inside the converted text file.  \n   - Uploaded files may be manually deleted before ingestion if they are no longer needed.\n\n2. **Ingest files into your knowledge graph**  \n   The pre-processed documents are loaded into the graph database as vertices using a dedicated ingestion job.\n\n![Upload Files](./docs/img/LocalFileUpload.png)\n\n#### Download from Cloud\n\nCloud ingestion works similarly to local uploads and also follows a two-step process:\n\n1. **Download files from cloud storage**  \n   Instead of selecting local files, you can connect to a cloud provider (S3, GCS, Azure) using the appropriate credentials.  \n   - Files are downloaded to the GraphRAG server for pre-processing.  \n   - Multimodal files (e.g., PDFs) are converted to text with extracted images, each with descriptive references.  \n   - Downloaded files can be manually deleted before ingestion if no longer needed.\n\n2. 
**Ingest files into your knowledge graph**  \n   After pre-processing, the documents are loaded into the graph database as vertices via a dedicated ingestion job.\n\n![Download from Cloud](./docs/img/DownloadFromCloud.png)\n\n#### Use Amazon BDA\n\nYou may choose **Amazon Bedrock Data Automation (BDA)** as the external document pre-processor instead of the built-in GraphRAG processor.  \n- Amazon BDA processes multimodal documents stored in an S3 bucket.  \n- It writes the converted outputs to a separate S3 bucket.  \n- These processed documents can then be ingested directly into your knowledge graph.  \n- This method is a **single-step ingestion workflow** since pre-processing is completed by BDA.\n\n![Use Amazon BDA](./docs/img/UseAmazonBDA.png)\n\n### Ingest Documents via API\n\nFor examples of how to ingest documents through the backend API, refer to the **[GraphRAG Demo Notebook](./docs/notebooks/GraphRAGDemo.ipynb)**.\n\n\n[Go back to top](#top)\n\n---\n\n## More Detailed Configurations\n\n### DB configuration\nCopy the below into `configs/server_config.json` and edit the `hostname` and `getToken` fields to match your database's configuration. If token authentication is enabled in TigerGraph, set `getToken` to `true`. Set the timeout, memory threshold, and thread limit parameters as desired to control how much of the database's resources are consumed when answering a question.\n\n```json\n{\n    \"db_config\": {\n        \"hostname\": \"http://tigergraph\",\n        \"restppPort\": \"9000\",\n        \"gsPort\": \"14240\",\n        \"username\": \"tigergraph\",\n        \"password\": \"tigergraph\",\n        \"getToken\": false,\n        \"default_timeout\": 300,\n        \"default_mem_threshold\": 5000,\n        \"default_thread_limit\": 8\n    }\n}\n```\n\n| Parameter | Type | Default | Description |\n| --- | --- | --- | --- |\n| `hostname` | string | `\"http://tigergraph\"` | TigerGraph server URL. 
|\n| `restppPort` | string | `\"9000\"` | RESTPP port for TigerGraph API requests. |\n| `gsPort` | string | `\"14240\"` | GSQL port for TigerGraph admin operations. |\n| `username` | string | `\"tigergraph\"` | TigerGraph database username. |\n| `password` | string | `\"tigergraph\"` | TigerGraph database password. |\n| `getToken` | bool | `false` | Set to `true` if token authentication is enabled on TigerGraph. |\n| `graphname` | string | `\"\"` | Default graph name. Usually left empty (selected at runtime). |\n| `apiToken` | string | `\"\"` | Pre-generated API token. If set, token-based auth is used instead of username/password. |\n| `default_timeout` | int | `300` | Default query timeout in seconds. |\n| `default_mem_threshold` | int | `5000` | Memory threshold (MB) for query execution. |\n| `default_thread_limit` | int | `8` | Max threads for query execution. |\n\n### GraphRAG configuration\nCopy the below code into `configs/server_config.json`. You shouldn’t need to change anything unless you change the port of the chat history service in the Docker Compose file.\n\n```json\n{\n    \"graphrag_config\": {\n        \"reuse_embedding\": false,\n        \"ecc\": \"http://graphrag-ecc:8001\",\n        \"chat_history_api\": \"http://chat-history:8002\",\n        \"chunker\": \"semantic\",\n        \"extractor\": \"llm\",\n        \"top_k\": 5,\n        \"num_hops\": 2\n    }\n}\n```\n\n| Parameter | Type | Default | Description |\n| --- | --- | --- | --- |\n| `reuse_embedding` | bool | `true` | Reuse existing embeddings instead of regenerating them. |\n| `ecc` | string | `\"http://graphrag-ecc:8001\"` | URL of the knowledge graph build service. No change needed when using the provided Docker Compose file. |\n| `chat_history_api` | string | `\"http://chat-history:8002\"` | URL of the chat history service. No change needed when using the provided Docker Compose file. |\n| `chunker` | string | `\"semantic\"` | Default document chunker. 
Options: `semantic`, `character`, `regex`, `markdown`, `html`, `recursive`. |\n| `extractor` | string | `\"llm\"` | Entity extraction method. Options: `llm`, `graphrag`. |\n| `chunker_config` | object | `{}` | Chunker-specific settings (see sub-parameters below). All settings are saved regardless of which chunker is selected as default. |\n| ↳ `chunk_size` | int | `2048` | Maximum number of characters per chunk. Used by `character`, `markdown`, `html`, and `recursive` chunkers. Larger values produce fewer, bigger chunks; smaller values produce more, finer-grained chunks. |\n| ↳ `overlap_size` | int | 1/8 of `chunk_size` | Number of overlapping characters between consecutive chunks. Used by `character`, `markdown`, `html`, and `recursive` chunkers. More overlap preserves cross-chunk context but increases total chunk count. Set to `0` for no overlap. |\n| ↳ `method` | string | `\"percentile\"` | Breakpoint detection method for the `semantic` chunker. Options: `percentile`, `standard_deviation`, `interquartile`, `gradient`. Controls how the chunker decides where to split based on embedding similarity. |\n| ↳ `threshold` | float | `0.95` | Similarity threshold for the `semantic` chunker. Higher values produce more splits (smaller chunks); lower values produce fewer splits (larger chunks). |\n| ↳ `pattern` | string | `\"\"` | Regular expression pattern for the `regex` chunker. The document is split at each match of this pattern. |\n| `top_k` | int | `5` | Number of initial seed results to retrieve per search. Also caps the final scored results. Increasing `top_k` increases the overall context size sent to the LLM. |\n| `num_hops` | int | `2` | Number of graph hops to traverse from seed nodes during hybrid search. More hops expand the result set with related context. |\n| `num_seen_min` | int | `2` | Minimum occurrence count for a node to be included during hybrid search traversal. Higher values filter out loosely connected nodes, reducing context size. 
|\n| `community_level` | int | `2` | Community hierarchy level for community search. Higher levels retrieve broader, higher-order community summaries. |\n| `chunk_only` | bool | `true` | If true, hybrid search only retrieves document chunks, excluding entity data. |\n| `doc_only` | bool | `false` | If true, hybrid search retrieves whole documents instead of chunks. Significantly increases context size. |\n| `with_chunk` | bool | `true` | If true, community search also includes document chunks alongside community summaries. Increases context size. |\n| `doc_process_switch` | bool | `true` | Enable/disable document processing during knowledge graph build. |\n| `entity_extraction_switch` | bool | same as `doc_process_switch` | Enable/disable entity extraction during knowledge graph build. |\n| `community_detection_switch` | bool | same as `entity_extraction_switch` | Enable/disable community detection during knowledge graph build. |\n| `load_batch_size` | int | `500` | Batch size for document loading. |\n| `upsert_delay` | int | `0` | Delay in seconds between loading batches. |\n| `default_concurrency` | int | `10` | Base concurrency level for parallel processing. Configurable per graph. |\n| `process_interval_seconds` | int | `300` | Interval (seconds) for background consistency processing. |\n| `cleanup_interval_seconds` | int | `300` | Interval (seconds) for background cleanup. |\n| `checker_batch_size` | int | `100` | Batch size for background consistency checking. |\n| `enable_consistency_checker` | bool | `false` | Enable the background consistency checker. |\n| `graph_names` | list | `[]` | Graphs to monitor when consistency checker is enabled. |\n\n### Chat History Configuration\nCopy the below code into `configs/server_config.json`. 
You shouldn’t need to change anything unless you change the port of the chat history service in the Docker Compose file.\n\n```json\n{\n    \"chat-history\": {\n        \"apiPort\": \"8002\",\n        \"dbPath\": \"chats.db\",\n        \"dbLogPath\": \"db.log\",\n        \"logPath\": \"requestLogs.jsonl\",\n        \"conversationAccessRoles\": [\"superuser\", \"globaldesigner\"]\n    }\n}\n```\n\n[Go back to top](#top)\n\n\n### LLM provider configuration\nIn the `llm_config` section of the `configs/server_config.json` file, copy the JSON config template below for your LLM provider and fill out the appropriate fields. Only one provider is needed.\n\n#### Structure overview\n\n```json\n{\n  \"llm_config\": {\n    \"authentication_configuration\": {\n      \"OPENAI_API_KEY\": \"sk-...\"\n    },\n    \"completion_service\": {\n      \"llm_service\": \"openai\",\n      \"llm_model\": \"gpt-4.1-mini\",\n      \"model_kwargs\": { \"temperature\": 0 },\n      \"prompt_path\": \"./common/prompts/openai_gpt4/\"\n    },\n    \"embedding_service\": {\n      \"embedding_model_service\": \"openai\",\n      \"model_name\": \"text-embedding-3-small\"\n    },\n    \"chat_service\": {\n      \"llm_model\": \"gpt-4.1\"\n    },\n    \"multimodal_service\": {\n      \"llm_service\": \"openai\",\n      \"llm_model\": \"gpt-4o\"\n    }\n  }\n}\n```\n\n- `authentication_configuration`: Shared credentials for all services. Service-level keys take precedence over top-level keys.\n- `completion_service` **(required)**: LLM for knowledge graph building and query generation.\n- `embedding_service` **(required)**: Text embedding model for document indexing.\n- `chat_service` *(optional)*: Chatbot LLM override. Missing keys are inherited from `completion_service`. 
Configurable per graph.\n- `multimodal_service` *(optional)*: Vision/image model for document ingestion.\n\n#### Supported parameters\n\n**Top-level `llm_config` parameters:**\n\n| Parameter | Type | Default | Description |\n| --- | --- | --- | --- |\n| `authentication_configuration` | object | — | Shared authentication credentials for all services. Service-level values take precedence. |\n| `token_limit` | int | — | Hard cap on token count for retrieved context sent to the LLM. Context exceeding this limit is truncated. Inherited by all services if not set at service level. `0` or omitted means unlimited. |\n\n**`completion_service` parameters:**\n\n| Parameter | Type | Required | Default | Description |\n| --- | --- | --- | --- | --- |\n| `llm_service` | string | **Yes** | — | LLM provider. Options: `openai`, `azure`, `vertexai`, `genai`, `bedrock`, `sagemaker`, `groq`, `ollama`, `huggingface`, `watsonx`. |\n| `llm_model` | string | **Yes** | — | Model name for knowledge graph building and query generation (e.g., `gpt-4.1-mini`). |\n| `authentication_configuration` | object | No | inherited from top-level | Service-specific auth credentials. Overrides top-level values. |\n| `model_kwargs` | object | No | `{}` | Additional model parameters (e.g., `{\"temperature\": 0}`). |\n| `prompt_path` | string | No | `\"./common/prompts/openai_gpt4/\"` | Path to prompt template files. |\n| `base_url` | string | No | — | Custom API endpoint URL. |\n| `token_limit` | int | No | inherited from top-level | Hard cap on token count for retrieved context sent to the LLM. Context exceeding this limit is truncated. `0` or omitted means unlimited. |\n\n**`embedding_service` parameters:**\n\n| Parameter | Type | Required | Default | Description |\n| --- | --- | --- | --- | --- |\n| `embedding_model_service` | string | **Yes** | — | Embedding provider. Options: `openai`, `azure`, `vertexai`, `genai`, `bedrock`, `ollama`. 
|\n| `model_name` | string | **Yes** | — | Embedding model name (e.g., `text-embedding-3-small`). |\n| `dimensions` | int | No | `1536` | Embedding vector dimensions. |\n| `authentication_configuration` | object | No | inherited from top-level | Service-specific auth credentials. Overrides top-level values. |\n\n**`chat_service` parameters (optional):**\n\nChatbot LLM override. If not configured, inherits from `completion_service`. Configurable per graph via the UI.\n\n| Parameter | Type | Required | Default | Description |\n| --- | --- | --- | --- | --- |\n| `llm_service` | string | No | same as completion | LLM provider for the chatbot. |\n| `llm_model` | string | No | same as completion | Model name for the chatbot. |\n| `authentication_configuration` | object | No | inherited from completion | Auth credentials. Service-level values take precedence. |\n| `model_kwargs` | object | No | inherited from completion | Additional model parameters (e.g., `{\"temperature\": 0}`). |\n| `prompt_path` | string | No | inherited from completion | Path to prompt template files. |\n| `base_url` | string | No | inherited from completion | Custom API endpoint URL. |\n| `token_limit` | int | No | inherited from completion | Hard cap on token count for retrieved context sent to the chatbot LLM. Context exceeding this limit is truncated. `0` or omitted means unlimited. |\n\n**`multimodal_service` parameters (optional):**\n\nVision model for image processing during document ingestion. If not configured, inherits from `completion_service` — ensure the completion model supports vision input.\n\n| Parameter | Type | Required | Default | Description |\n| --- | --- | --- | --- | --- |\n| `llm_service` | string | No | inherited from completion | Multimodal LLM provider. |\n| `llm_model` | string | No | inherited from completion | Vision model name (e.g., `gpt-4o`). |\n| `authentication_configuration` | object | No | inherited from completion | Service-specific auth credentials. 
Overrides top-level values. |\n| `model_kwargs` | object | No | inherited from completion | Additional model parameters. |\n| `prompt_path` | string | No | inherited from completion | Path to prompt template files. |\n\n#### Provider examples\n\n#### OpenAI\nIn addition to setting `OPENAI_API_KEY`, you can edit `llm_model` and `model_name` to match your specific configuration.\n\n```json\n{\n    \"llm_config\": {\n        \"embedding_service\": {\n            \"embedding_model_service\": \"openai\",\n            \"model_name\": \"text-embedding-3-small\",\n            \"authentication_configuration\": {\n                \"OPENAI_API_KEY\": \"YOUR_OPENAI_API_KEY_HERE\"\n            }\n        },\n        \"completion_service\": {\n            \"llm_service\": \"openai\",\n            \"llm_model\": \"gpt-4.1-mini\",\n            \"authentication_configuration\": {\n                \"OPENAI_API_KEY\": \"YOUR_OPENAI_API_KEY_HERE\"\n            },\n            \"model_kwargs\": {\n                \"temperature\": 0\n            },\n            \"prompt_path\": \"./common/prompts/openai_gpt4/\"\n        }\n    }\n}\n```\n\n#### Google GenAI\n\nGet your Gemini API key via https://aistudio.google.com/app/apikey.\n\n```json\n{\n    \"llm_config\": {\n        \"embedding_service\": {\n            \"embedding_model_service\": \"genai\",\n            \"model_name\": \"models/gemini-embedding-exp-03-07\",\n            \"dimensions\": 1536,\n            \"authentication_configuration\": {\n                \"GOOGLE_API_KEY\": \"YOUR_GOOGLE_API_KEY_HERE\"\n            }\n        },\n        \"completion_service\": {\n            \"llm_service\": \"genai\",\n            \"llm_model\": \"gemini-2.5-flash\",\n            \"authentication_configuration\": {\n                \"GOOGLE_API_KEY\": \"YOUR_GOOGLE_API_KEY_HERE\"\n            },\n            \"model_kwargs\": {\n                \"temperature\": 0\n            },\n            \"prompt_path\": 
\"./common/prompts/google_gemini/\"\n        }\n    }\n}\n```\n\n#### GCP VertexAI\n\nFollow the GCP authentication information found here: https://cloud.google.com/docs/authentication/application-default-credentials#GAC and create a Service Account with VertexAI credentials. Then add the following to the docker run command:\n\n```sh\n-v $(pwd)/configs/SERVICE_ACCOUNT_CREDS.json:/SERVICE_ACCOUNT_CREDS.json -e GOOGLE_APPLICATION_CREDENTIALS=/SERVICE_ACCOUNT_CREDS.json\n```\n\nAnd your JSON config should follow as:\n\n```json\n{\n    \"llm_config\": {\n        \"embedding_service\": {\n            \"embedding_model_service\": \"vertexai\",\n            \"model_name\": \"GCP-text-bison\",\n            \"authentication_configuration\": {}\n        },\n        \"completion_service\": {\n            \"llm_service\": \"vertexai\",\n            \"llm_model\": \"text-bison\",\n            \"model_kwargs\": {\n                \"temperature\": 0\n            },\n            \"prompt_path\": \"./common/prompts/gcp_vertexai_palm/\"\n        }\n    }\n}\n```\n\n#### Azure\n\nIn addition to the `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `azure_deployment`, `llm_model` and `model_name` can be edited to match your specific configuration details.\n\n```json\n{\n    \"llm_config\": {\n        \"embedding_service\": {\n            \"embedding_model_service\": \"azure\",\n            \"model_name\": \"GPT35Turbo\",\n            \"azure_deployment\":\"YOUR_EMBEDDING_DEPLOYMENT_HERE\",\n            \"authentication_configuration\": {\n                \"OPENAI_API_TYPE\": \"azure\",\n                \"OPENAI_API_VERSION\": \"2022-12-01\",\n                \"AZURE_OPENAI_ENDPOINT\": \"YOUR_AZURE_ENDPOINT_HERE\",\n                \"AZURE_OPENAI_API_KEY\": \"YOUR_AZURE_API_KEY_HERE\"\n            }\n        },\n        \"completion_service\": {\n            \"llm_service\": \"azure\",\n            \"azure_deployment\": \"YOUR_COMPLETION_DEPLOYMENT_HERE\",\n            
\"openai_api_version\": \"2023-07-01-preview\",\n            \"llm_model\": \"gpt-35-turbo-instruct\",\n            \"authentication_configuration\": {\n                \"OPENAI_API_TYPE\": \"azure\",\n                \"AZURE_OPENAI_ENDPOINT\": \"YOUR_AZURE_ENDPOINT_HERE\",\n                \"AZURE_OPENAI_API_KEY\": \"YOUR_AZURE_API_KEY_HERE\"\n            },\n            \"model_kwargs\": {\n                \"temperature\": 0\n            },\n            \"prompt_path\": \"./common/prompts/azure_open_ai_gpt35_turbo_instruct/\"\n        }\n    }\n}\n```\n\n#### AWS Bedrock\n\n```json\n{\n    \"llm_config\": {\n        \"embedding_service\": {\n            \"embedding_model_service\": \"bedrock\",\n            \"model_name\":\"amazon.titan-embed-text-v2\",\n            \"region_name\":\"us-west-2\",\n            \"authentication_configuration\": {\n                \"AWS_ACCESS_KEY_ID\": \"ACCESS_KEY\",\n                \"AWS_SECRET_ACCESS_KEY\": \"SECRET\"\n            }\n        },\n        \"completion_service\": {\n            \"llm_service\": \"bedrock\",\n            \"llm_model\": \"us.anthropic.claude-3-7-sonnet-20250219-v1:0\",\n            \"region_name\":\"us-west-2\",\n            \"authentication_configuration\": {\n                \"AWS_ACCESS_KEY_ID\": \"ACCESS_KEY\",\n                \"AWS_SECRET_ACCESS_KEY\": \"SECRET\"\n            },\n            \"model_kwargs\": {\n                \"temperature\": 0,\n            },\n            \"prompt_path\": \"./common/prompts/aws_bedrock_claude3haiku/\"\n        }\n    }\n}\n```\n\n#### Ollama\n\n```json\n{\n    \"llm_config\": {\n        \"embedding_service\": {\n            \"embedding_model_service\": \"ollama\",\n            \"base_url\": \"http://ollama:11434\",\n            \"model_name\": \"nomic-embed-text\",\n            \"dimensions\": 768,\n            \"authentication_configuration\": {\n            }\n        },\n        \"completion_service\": {\n            \"llm_service\": \"ollama\",\n       
     \"base_url\": \"http://ollama:11434\",\n            \"llm_model\": \"calebfahlgren/natural-functions\",\n            \"model_kwargs\": {\n                \"temperature\": 0.0000001\n            },\n            \"prompt_path\": \"./common/prompts/openai_gpt4/\"\n        }\n    }\n}\n```\n\n#### Hugging Face\n\nExample configuration for a model on Hugging Face with a dedicated endpoint is shown below. Please specify your configuration details:\n\n```json\n{\n    \"llm_config\": {\n        \"embedding_service\": {\n            \"embedding_model_service\": \"openai\",\n            \"model_name\": \"llama3-8b\",\n            \"authentication_configuration\": {\n                \"OPENAI_API_KEY\": \"\"\n            }\n        },\n        \"completion_service\": {\n            \"llm_service\": \"huggingface\",\n            \"llm_model\": \"hermes-2-pro-llama-3-8b-lpt\",\n            \"endpoint_url\": \"https:endpoints.huggingface.cloud\",\n            \"authentication_configuration\": {\n                \"HUGGINGFACEHUB_API_TOKEN\": \"\"\n            },\n            \"model_kwargs\": {\n                \"temperature\": 0.1\n            },\n            \"prompt_path\": \"./common/prompts/openai_gpt4/\"\n        }\n    }\n}\n```\n\nExample configuration for a model on Hugging Face with a serverless endpoint is shown below. 
Please specify your configuration details:\n\n```json\n{\n    \"llm_config\": {\n        \"embedding_service\": {\n            \"embedding_model_service\": \"openai\",\n            \"model_name\": \"Llama3-70b\",\n            \"authentication_configuration\": {\n                \"OPENAI_API_KEY\": \"\"\n            }\n        },\n        \"completion_service\": {\n            \"llm_service\": \"huggingface\",\n            \"llm_model\": \"meta-llama/Meta-Llama-3-70B-Instruct\",\n            \"authentication_configuration\": {\n                \"HUGGINGFACEHUB_API_TOKEN\": \"\"\n            },\n            \"model_kwargs\": {\n                \"temperature\": 0.1\n            },\n            \"prompt_path\": \"./common/prompts/llama_70b/\"\n        }\n    }\n}\n```\n\n#### Groq\n\n```json\n{\n    \"llm_config\": {\n        \"embedding_service\": {\n            \"embedding_model_service\": \"openai\",\n            \"model_name\": \"mixtral-8x7b-32768\",\n            \"authentication_configuration\": {\n                \"OPENAI_API_KEY\": \"\"\n            }\n        },\n        \"completion_service\": {\n            \"llm_service\": \"groq\",\n            \"llm_model\": \"mixtral-8x7b-32768\",\n            \"authentication_configuration\": {\n                \"GROQ_API_KEY\": \"\"\n            },\n            \"model_kwargs\": {\n                \"temperature\": 0.1\n            },\n            \"prompt_path\": \"./common/prompts/openai_gpt4/\"\n        }\n    }\n}\n```\n\n[Go back to top](#top)\n\n---\n\n## Customization and Extensibility\nTigerGraph GraphRAG is designed to be easily extensible. The service can be configured to use different LLM providers, graph schemas, embedding services, LLM generation services, and LangChain tools. 
For more information on how to extend the service, see the [Developer Guide](./docs/DeveloperGuide.md).\n\n### Test Your Code Changes\nA family of tests is included under the `tests` directory. If you would like to add more tests, please refer to the [guide here](./docs/DeveloperGuide.md#adding-a-new-test-suite). The folder also includes a shell script, `run_tests.sh`, which is the driver for running the tests. The easiest way to use this script is to execute it in the Docker container for testing.\n\n#### Testing with Pytest\nYou can run the tests for each service by going to the top level of the service's directory and running `python -m pytest`.\n\ne.g. (from the top level)\n```sh\ncd graphrag\npython -m pytest\ncd ..\n```\n\n#### Test Code Change in Docker Container\n\nFirst, make sure that all your LLM service provider configuration files are working properly. The configs will be mounted for the container to access. Also make sure that all dependencies, such as the database, are ready. If not, you can run the included Docker Compose file to create those services.\n```sh\ndocker compose up -d --build\n```\n\nIf you want to use Weights and Biases for logging the test results, your WandB API key needs to be set in an environment variable on the host machine.\n\n```sh\nexport WANDB_API_KEY=\"YOUR_KEY_HERE\"\n```\n\nThen, you can build the Docker container from the `Dockerfile.tests` file and run the test script in the container.\n```sh\ndocker build -f Dockerfile.tests -t graphrag-tests:0.1 .\n\ndocker run -d -v $(pwd)/configs/:/ -e GOOGLE_APPLICATION_CREDENTIALS=/GOOGLE_SERVICE_ACCOUNT_CREDS.json -e WANDB_API_KEY=$WANDB_API_KEY -it --name graphrag-tests graphrag-tests:0.1\n\n\ndocker exec graphrag-tests bash -c \"conda run --no-capture-output -n py39 ./run_tests.sh all all\"\n```\n\n### Test Script Options\n\nTo edit which tests are executed, one can pass arguments to the `./run_tests.sh` script. 
Currently, one can configure which LLM service to use (defaults to `all`), which schemas to test against (defaults to `all`), and whether or not to use Weights and Biases for logging (defaults to `true`). The options are described below:\n\n#### Configure LLM Service\nThe first parameter to `run_tests.sh` is which LLMs to test against. Defaults to `all`. The options are:\n\n* `all` - run tests against all LLMs\n* `azure_gpt35` - run tests against GPT-3.5 hosted on Azure\n* `openai_gpt35` - run tests against GPT-3.5 hosted on OpenAI\n* `openai_gpt4` - run tests on GPT-4 hosted on OpenAI\n* `gcp_textbison` - run tests on text-bison hosted on GCP\n\n#### Configure Testing Graphs\nThe second parameter to `run_tests.sh` is which graphs to test against. Defaults to `all`. The options are:\n\n* `all` - run tests against all available graphs\n* `OGB_MAG` - The academic paper dataset provided by https://ogb.stanford.edu/docs/nodeprop/#ogbn-mag\n* `DigtialInfra` - Digital infrastructure digital twin dataset\n* `Synthea` - Synthetic health dataset\n\n#### Configure Weights and Biases\nIf you wish to log the test results to Weights and Biases (and have set up the credentials above), note that the final parameter to `run_tests.sh` defaults to `true`. To disable Weights and Biases logging, pass `false`.\n
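The three positional parameters above follow ordinary shell defaulting: any argument you omit falls back to its default. As a sketch only (the real script's internals may differ), the parameter handling can be expressed like this:\n\n```sh\n# Sketch of run_tests.sh-style positional defaults (illustrative, not the\n# actual script): each parameter falls back to its documented default.\nllm_service="${1:-all}"   # first arg: which LLMs to test, defaults to "all"\ngraphs="${2:-all}"        # second arg: which graphs to test, defaults to "all"\nuse_wandb="${3:-true}"    # third arg: Weights and Biases logging, defaults to "true"\necho "$llm_service $graphs $use_wandb"\n```\n\nSo `./run_tests.sh openai_gpt4 Synthea false` would run only the OpenAI GPT-4 suite against the Synthea graph with logging disabled, while `./run_tests.sh` alone is equivalent to `./run_tests.sh all all true`.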