Chatbot with LLM
- Host: GitHub
- URL: https://github.com/egpivo/llmchatbot
- Owner: egpivo
- License: bsd-2-clause
- Created: 2023-12-12T13:37:40.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2024-02-05T05:33:49.000Z (over 1 year ago)
- Last Synced: 2025-03-17T14:08:23.717Z (7 months ago)
- Topics: bentoml, chatbot, chatgpt, langchain, llm
- Language: Python
- Homepage: https://pypi.org/project/llmchatbot/
- Size: 521 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Changelog: NEWS.md
- License: LICENSE
# LLM-Chatbot
## Installation
Install the Chatbot Python package with either of the following commands:
1. PyPI
```shell
pip install llmchatbot
```
2. GitHub repository
```bash
pip install git+https://github.com/egpivo/llmchatbot.git
```
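To confirm the installation, you can check the installed package and try importing it (this assumes the importable module name matches the PyPI package name):
```shell
# Show the installed version and metadata
pip show llmchatbot

# Quick import check; assumes the package is importable as `llmchatbot`
python -c "import llmchatbot"
```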
## Serving Automation
This repository automates checking and fine-tuning the pre-trained models used by the Chatbot application. The automation script lets you customize the SpeechT5 and Whisper models and retrain them if needed.
### Serving Process Flow
```mermaid
graph TD
A[Check if Model Exists]
B[Fine-Tune Model]
C[Load BentoML Configuration]
D[Serve the App]
E[Check SSL Certificates]
F[Generate Dummy SSL Certificates]
A -- Yes --> C
A -- No --> B
B --> C
C --> D
D --> E
E -- No --> F
E -- Yes --> D
```
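As a rough illustration of the flow above, a minimal shell sketch might look like the following; the paths, configuration file, and BentoML service target are assumptions for illustration, not the project's actual values:
```shell
# Illustrative sketch only: fine-tune when no model artifact is present,
# then load the BentoML configuration and serve.
MODEL_DIR="artifacts/models"                 # assumed artifact location

if [ ! -d "${MODEL_DIR}" ]; then
    echo "No fine-tuned model found; running fine-tuning..."
    # fine-tuning step would go here
fi

export BENTOML_CONFIG="configs/bentoml.yaml" # assumed configuration path
bentoml serve service:svc                    # hypothetical service target
```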
### Artifact Folder
During the model serving process, the `artifacts` folder is used to store the BentoML artifacts, essential for serving the Chatbot application.
## Usage
### Local Model Serving
#### Default Model Values
Run the Chatbot service with default model values:
```shell
make local-serve
```
#### Customizing the Serving Process
Customize the Chatbot serving process using the automation script. Specify your desired models and options:
```shell
bash scripts/run_app_service.sh \
--t5_pretrained_model {replace_with_actual_t5_model} \
--t5_pretrained_vocoder {replace_with_actual_t5_vocoder} \
--whisper_pretrained_model {replace_with_actual_whisper_model} \
--is_retraining
```
- **Note**: Replace `{replace_with_actual_t5_model}`, `{replace_with_actual_t5_vocoder}`, and `{replace_with_actual_whisper_model}` with your preferred values. Adding the `--is_retraining` flag forces model retraining (see the example below).
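For example, one plausible invocation uses publicly available Hugging Face checkpoints (the model names below are illustrative choices, not necessarily the project's defaults):
```shell
bash scripts/run_app_service.sh \
  --t5_pretrained_model microsoft/speecht5_tts \
  --t5_pretrained_vocoder microsoft/speecht5_hifigan \
  --whisper_pretrained_model openai/whisper-tiny \
  --is_retraining
```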
### Model Serving via Docker
#### By Makefile:
```shell
make docker-serve
```
#### By `docker` CLI
- DockerHub
```shell
docker run -p 443:443 egpivo/chatbot:latest
```
- GitHub Package
```shell
docker run -p 443:443 ghcr.io/egpivo/llmchatbot:latest
```
### Client Side
Access the demo chatbot at `https://{ip}/chatbot`, where the default `ip` is `0.0.0.0`.
- Note: Dummy SSL certificates and keys are generated by default for secure communication if `key.pem` and `cert.pem` do not exist in `artifacts/`; you can also replace them with your own certificates.
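If you prefer your own certificates over the auto-generated dummy ones, a self-signed pair can be created with `openssl` using the file names mentioned above, and the endpoint checked with `curl` (the `-k` flag skips verification for self-signed certificates):
```shell
# Create a self-signed certificate/key pair in artifacts/
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout artifacts/key.pem -out artifacts/cert.pem \
  -subj "/CN=localhost"

# Check that the service responds; replace localhost with the server's IP if remote
curl -k https://localhost/chatbot
```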
## Demo
- Explore the demo site hosted on Alibaba Cloud via https://egpivo.com/chatbot/.
- Note: This site is intended for demo purposes only; computing performance is not guaranteed.
## Remark
- Reference: [BentoChain Repository](https://github.com/ssheng/BentoChain)
- **License:** [BSD 2-Clause License](LICENSE)