https://github.com/iamaziz/mini_rag_llm
A minimal example for in-memory RAG using ChromaDB and Ollama LLM
chromadb langchain llm-rag llms ollama rag
- Host: GitHub
- URL: https://github.com/iamaziz/mini_rag_llm
- Owner: iamaziz
- License: mit
- Created: 2024-01-12T08:26:12.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-01-19T22:40:30.000Z (over 1 year ago)
- Last Synced: 2025-04-20T00:38:13.690Z (about 2 months ago)
- Topics: chromadb, langchain, llm-rag, llms, ollama, rag
- Language: Python
- Homepage:
- Size: 18.6 KB
- Stars: 4
- Watchers: 2
- Forks: 2
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Toy RAG example
A minimal example of in-memory RAG with an Ollama LLM.
It uses the `Mixtral:8x7` LLM (via Ollama), `LangChain` (to load the model), and `ChromaDB` (to build and search the RAG index). More details in [What is RAG anyway?](https://altowayan.notion.site/altowayan/What-is-RAG-anyway-6a945b7b4e784eda8a707249a078937e)
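The retrieve-then-generate flow described above can be sketched in plain Python. This is only an illustration of the idea, not the repo's actual code: it fakes the embedding step with a bag-of-words vector instead of ChromaDB's real embeddings, and stops at prompt assembly instead of calling the Ollama model.

```python
# Minimal, dependency-free sketch of the RAG flow:
# 1) "embed" documents, 2) retrieve the ones most similar to the query,
# 3) stuff them into the prompt that would be sent to the LLM.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context_docs: list[str]) -> str:
    """Assemble the augmented prompt that a RAG system sends to the LLM."""
    context = "\n".join(context_docs)
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Ollama runs large language models locally.",
    "ChromaDB is an embedding database for building RAG indexes.",
]
query = "What is ChromaDB?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)
```

In the real project, `embed`/`retrieve` are handled by ChromaDB and the final prompt goes to Mixtral through LangChain's Ollama integration.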
To run this example, the following is required:
- Install [Ollama.ai](https://ollama.ai)
- Download a local LLM: `ollama run mixtral` (requires at least ~50 GB of RAM; smaller LLMs may also work, but I haven't tested them)
- `pip install -r requirements.txt` (a venv is recommended)

Then run:
```bash
python mini_rag.py
```

#### Example
https://github.com/iamaziz/mini_RAG_LLM/assets/3298308/ee7d12a4-1acd-4a0d-8d46-a90a20a98b5a
#### Simplified sequence
> [source](https://chat.openai.com/share/35ddfd24-f719-436f-9109-f735940957c7)