https://github.com/couchbase-examples/nvidia-rag-demo
- Host: GitHub
- URL: https://github.com/couchbase-examples/nvidia-rag-demo
- Owner: couchbase-examples
- License: MIT
- Created: 2024-06-07T09:01:47.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2026-01-13T10:29:10.000Z (2 months ago)
- Last Synced: 2026-01-13T11:33:12.167Z (2 months ago)
- Language: Python
- Size: 43.9 KB
- Stars: 0
- Watchers: 4
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
## RAG Demo using Couchbase, NVIDIA NIM, Meta Llama 3, LangChain and Streamlit
This is a demo app built to chat with your custom PDFs, using the vector search capabilities of Couchbase to augment the Llama 3 results in a Retrieval-Augmented Generation (RAG) pipeline, powered by NVIDIA NIM.
### How does it work?
You can upload PDFs containing your custom data and ask questions about that data in the chat box.
For each question, you get two answers:
- one using RAG (Couchbase logo)
- one using the pure LLM, Llama 3 (🤖)
For RAG, we use LangChain, Couchbase Vector Search, NVIDIA NIM and Meta Llama 3. We fetch the parts of the PDF relevant to the question using vector search and add them as context for the LLM. The LLM is instructed to answer based only on the context from the vector store.
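The retrieve-then-generate step above boils down to stitching the vector-search hits into the prompt. A minimal sketch of that prompt assembly; the function name and chunk values are illustrative, not the app's actual code:

```python
# Illustrative sketch: stitch vector-search hits into the LLM prompt.
def build_rag_prompt(question, retrieved_chunks):
    """Combine retrieved PDF chunks into a context block for the LLM."""
    context = "\n\n".join(chunk.strip() for chunk in retrieved_chunks)
    return (
        "Answer the question based only on the following context.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Hypothetical chunks that vector search might return for a question.
chunks = [
    "Couchbase stores the PDF chunks and their embeddings.",
    "NVIDIA NIM serves the Llama 3 model.",
]
prompt = build_rag_prompt("Where are the embeddings stored?", chunks)
print(prompt)
```

The pure-LLM answer skips this step and sends the question alone, which is what makes the side-by-side comparison in the chat useful.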
### How to Run
- #### Install dependencies
`pip install -r requirements.txt`
- #### Set the environment secrets
Copy the `secrets.example.toml` file in the `.streamlit` folder, rename it to `secrets.toml`, and replace the placeholders with the actual values for your environment:
```toml
NVIDIA_API_KEY = ""
DB_CONN_STR = ""
DB_USERNAME = ""
DB_PASSWORD = ""
DB_BUCKET = ""
DB_SCOPE = ""
DB_COLLECTION = ""
INDEX_NAME = ""
LOGIN_PASSWORD = ""
```
- #### Create the Search Index on the Full Text Service
We need to create a Search Index on the Full Text Service in Couchbase. For this demo, you can import the index definition below using the following instructions:
- [Couchbase Capella](https://docs.couchbase.com/cloud/search/import-search-index.html)
  - Copy the index definition below into a new file, `index.json`.
  - Import the file in Capella using the instructions in the documentation.
  - Click on "Create Index" to create the index.
- [Couchbase Server](https://docs.couchbase.com/server/current/search/import-search-index.html)
  - Click on Search -> Add Index -> Import.
  - Copy the index definition below into the Import screen.
  - Click on "Create Index" to create the index.
#### Index Definition
Here, we create the index `pdf_search` on the documents in the `docs` collection within the `shared` scope of the bucket `pdf-docs`. The vector field is set to `embedding` with 1024 dimensions and the text field is set to `text`. We also index and store all the fields under `metadata` in the document as a dynamic mapping to account for varying document structures. The similarity metric is set to `dot_product`. If any of these parameters change, please adapt the index accordingly.
```json
{
"name": "pdf_search",
"type": "fulltext-index",
"params": {
"doc_config": {
"docid_prefix_delim": "",
"docid_regexp": "",
"mode": "scope.collection.type_field",
"type_field": "type"
},
"mapping": {
"default_analyzer": "standard",
"default_datetime_parser": "dateTimeOptional",
"default_field": "_all",
"default_mapping": {
"dynamic": true,
"enabled": false
},
"default_type": "_default",
"docvalues_dynamic": false,
"index_dynamic": true,
"store_dynamic": false,
"type_field": "_type",
"types": {
"shared.docs": {
"dynamic": true,
"enabled": true,
"properties": {
"embedding": {
"enabled": true,
"dynamic": false,
"fields": [
{
"dims": 1024,
"index": true,
"name": "embedding",
"similarity": "dot_product",
"type": "vector",
"vector_index_optimized_for": "recall"
}
]
},
"text": {
"enabled": true,
"dynamic": false,
"fields": [
{
"index": true,
"name": "text",
"store": true,
"type": "text"
}
]
}
}
}
}
},
"store": {
"indexType": "scorch",
"segmentVersion": 16
}
},
"sourceType": "gocbcore",
"sourceName": "pdf-docs",
"sourceParams": {},
"planParams": {
"maxPartitionsPerPIndex": 64,
"indexPartitions": 16,
"numReplicas": 0
}
}
```
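Before importing, it can help to sanity-check that your `index.json` matches the parameters the app expects (vector field name, 1024 dimensions, `dot_product` similarity). A minimal sketch, assuming the definition has been loaded into a dict; the `check_index` helper is illustrative, not part of the demo:

```python
# Illustrative helper: verify an FTS index definition matches the
# app's assumptions before importing it into Capella or Couchbase Server.
def check_index(defn, dims=1024, similarity="dot_product"):
    props = defn["params"]["mapping"]["types"]["shared.docs"]["properties"]
    vec = props["embedding"]["fields"][0]
    if vec["type"] != "vector":
        raise ValueError("embedding field is not a vector field")
    if vec["dims"] != dims or vec["similarity"] != similarity:
        raise ValueError("vector dims/similarity do not match the app config")
    return True

# Trimmed-down version of the definition above, for demonstration.
defn = {"params": {"mapping": {"types": {"shared.docs": {"properties": {
    "embedding": {"fields": [
        {"type": "vector", "dims": 1024, "similarity": "dot_product"}
    ]}
}}}}}}
print(check_index(defn))
```

In practice you would load the real file with `json.load(open("index.json"))` and run the same check; a mismatch between the index dimensions and the embedding model's output size is a common cause of empty search results.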
- #### Run the application with streamlit
`streamlit run chat_with_pdf.py`