https://github.com/Nagi-ovo/CRAG-Ollama-Chat
Corrective RAG demo powered by Ollama
- Host: GitHub
- URL: https://github.com/Nagi-ovo/CRAG-Ollama-Chat
- Owner: Nagi-ovo
- License: mit
- Created: 2024-03-23T10:06:39.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-04-26T17:14:44.000Z (about 1 year ago)
- Last Synced: 2024-04-26T18:28:49.591Z (about 1 year ago)
- Language: Python
- Homepage:
- Size: 203 KB
- Stars: 24
- Watchers: 1
- Forks: 3
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- Awesome-Ollama - CRAG Ollama Chat
README
# CRAG Ollama Chat
![]()
> Created by ideogram.ai
## Preview
Run the demo by:
1. Create a `config.yaml` file following the format of `config.example.yaml` and fill in the required settings:
```yaml
# APIs: if you aren't using Ollama
openai_api_key: "sk-"
openai_api_base: "https://api.openai.com/v1/chat/completions" # Or your own proxy
google_api_key: "your_google_api_key" # Unnecessary
tavily_api_key: "tvly-" # A must for the web-search tools; create one at https://app.tavily.com/

# Ollama Config
run_local: "Yes" # Yes or No; if Yes, you must have Ollama running on your PC
local_llm: "openhermes" # mistral, llama2, ...

# Model Config
models: "openai" # If you want to achieve the best results

# Document Config
# Supports reading multiple websites
doc_url: # My blogs right now
  - "https://nagi.fun/llm-5-transformer"
  - "https://nagi.fun/llm-4-wavenet"
```
2. Install dependencies with Poetry or `pip install -r requirements.txt`.
3. Run the command below:
```zsh
streamlit run app.py
```
## References
- [langchain-ai/langgraph_crag_mistral](https://github.com/langchain-ai/langgraph/blob/2b42407f055dbb77331de46fe3a632ea24551347/examples/rag/langgraph_crag_mistral.ipynb)
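As a side note, the `config.yaml` described above can be read with PyYAML. The sketch below is illustrative only (the key names follow the example config; the helper functions are not part of this repo):

```python
import yaml  # pip install pyyaml


def load_config(path="config.yaml"):
    """Read the demo's YAML settings file and return it as a dict."""
    with open(path, encoding="utf-8") as f:
        return yaml.safe_load(f)


def pick_model(config):
    """Return the local Ollama model name when run_local is "Yes",
    otherwise fall back to the configured API model."""
    if config.get("run_local", "No") == "Yes":
        return config["local_llm"]
    return config["models"]
```

For example, `pick_model({"run_local": "Yes", "local_llm": "openhermes", "models": "openai"})` returns `"openhermes"`.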