https://github.com/rockchinq/llm-embed-qa
Question answering system built with vector dbs and LLMs.
- Host: GitHub
- URL: https://github.com/rockchinq/llm-embed-qa
- Owner: RockChinQ
- Created: 2023-08-11T03:45:35.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2023-09-10T12:47:11.000Z (almost 2 years ago)
- Last Synced: 2025-01-12T12:35:57.169Z (6 months ago)
- Topics: embedding, gpt, llm, milvus, openai, qa, qaautomation
- Language: Python
- Homepage:
- Size: 51.8 KB
- Stars: 4
- Watchers: 3
- Forks: 0
- Open Issues: 1
Metadata Files:
- Readme: README.md
README
# llm-embed-qa
Question answering system built with vector dbs and LLMs.
> Parts of this project are implemented with reference to [michaelliao/llm-embedding-sample](https://github.com/michaelliao/llm-embedding-sample)
## Requirements (Default Component)
- Python 3.10
- Docker

## Install
1. Clone this repo
2. Install requirements with `pip install -r requirements.txt`
3. Start up PostgreSQL with Docker:

```bash
docker run -d \
--rm \
--name pgvector \
-p 5432:5432 \
-e POSTGRES_PASSWORD=password \
-e POSTGRES_USER=postgres \
-e POSTGRES_DB=postgres \
-e PGDATA=/var/lib/postgresql/data/pgdata \
-v /path/to/llm-embedding-qa/pg-data:/var/lib/postgresql/data \
-v /path/to/llm-embedding-qa/pg-init-script:/docker-entrypoint-initdb.d \
ankane/pgvector:latest
```

**NOTE:** replace `/path/to/...` with the real paths.
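The `pg-init-script` mount points at the container's `docker-entrypoint-initdb.d` directory, whose scripts run once when the data directory is first created. A typical init script for a pgvector setup enables the extension; the repo ships its own script, so treat this as an illustrative sketch:

```sql
-- Runs on first container start; enables vector columns and similarity operators.
CREATE EXTENSION IF NOT EXISTS vector;
```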
4. Run `python main.py`, then edit `config.yaml` to set your OpenAI `api_key`.
5. Put your `markdown` format documents in `docs` folder.
- Example documents (the wiki files of [QChatGPT](https://github.com/RockChinQ/QChatGPT)) are provided in the `docs_examples` folder.
6. Run `python main.py` again; it will automatically build the vector database and start the server.

## Usage
- `GET /ask`
- `content`: the content of the question
- `strict`: (Optional) if `strict=true`, skip the LLM request when no related answer is found in the vector db
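As a sketch of how a client might call this endpoint from Python using only the standard library (the host and port below are assumptions; check the server's startup output for the actual address):

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Assumed local address; adjust to wherever the server actually listens.
BASE_URL = "http://127.0.0.1:8000"

def ask_url(content: str, strict: bool = False) -> str:
    """Build the query URL for the GET /ask endpoint."""
    params = {"content": content}
    if strict:
        # strict=true: skip the LLM request when the vector db has no related answer
        params["strict"] = "true"
    return f"{BASE_URL}/ask?{urlencode(params)}"

url = ask_url("How do I configure the bot?", strict=True)
print(url)

# Performing the actual request requires the server to be running:
# with urlopen(url) as resp:
#     print(resp.read().decode())
```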