https://github.com/4darsh-dev/medicure-rag-chatbot
An AI chatbot implementing the retrieval-augmented generation (RAG) technique with the Meta Llama-2-7b large language model, using LangChain and the Pinecone vector database.
- Host: GitHub
- URL: https://github.com/4darsh-dev/medicure-rag-chatbot
- Owner: 4darsh-Dev
- License: MIT
- Created: 2024-07-04T19:30:41.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2024-07-25T08:44:40.000Z (9 months ago)
- Last Synced: 2025-03-10T06:53:13.252Z (about 2 months ago)
- Topics: chatbotai, huggingface-transformers, langchain, llama2-7b, medical-chatbot, pineconedb, python, retrieval-augmented-generation, transformers, vector-database
- Language: Python
- Homepage: https://huggingface.co/spaces/4darsh-Dev/medicure
- Size: 99.6 KB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Funding: .github/funding.yml
- License: LICENSE
# 💡Medicure RAG Chatbot🤖
An AI chatbot implementing the retrieval-augmented generation (RAG) technique with the Meta Llama-2-7b large language model, using LangChain and the Pinecone vector database.
Resource Used 📖 : The Gale Encyclopedia of Medicine

## Screenshots
## Technologies Used
1. Streamlit - web app UI
2. Pinecone - vector database
3. LangChain and sentence-transformers - RetrievalQA chain and embedding model
4. Meta Llama-2-7b-chat quantized model - large language model (LLM) from the Hugging Face Hub

## Solution Approach
The Pinecone vector DB stores embeddings of the text chunks generated from the book PDF. LangChain is used to build the retrieval chain with a PromptTemplate: a similarity search against Pinecone retrieves the most relevant chunks, and the LLM then refines them into the final answer.

## Running Web App Locally
To run the web app locally, follow these steps:
1. **Clone the Repo**:
```bash
git clone https://github.com/4darsh-Dev/medicure-rag-chatbot.git
```

2. **Configure Poetry**:
```bash
pip install poetry
poetry init
poetry shell
```
3. **Install Requirements**:
```bash
poetry install
```

4. **Run the Streamlit App**:
```bash
poetry run streamlit run app.py
```

5. **Access Your App**: After running the command, Streamlit starts a local web server and prints a URL where you can access your app, typically `http://localhost:8501`. Open this URL in your web browser.
6. **Stop the Streamlit Server**: To stop the Streamlit server, go back to the terminal or command prompt where it's running and press `Ctrl + C` to terminate the server.
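The retrieval step described under Solution Approach can be illustrated with a small, self-contained sketch. This is not the project's actual code: the real app embeds chunks with a sentence-transformers model and queries a Pinecone index, whereas the toy `embed()` below uses plain word counts purely to show how top-k cosine-similarity search selects the chunks that get passed to the LLM.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector.
    (The real project uses a sentence-transformers model instead.)"""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k_chunks(chunks, query, k=2):
    """Return the k chunks most similar to the query --
    the role Pinecone's similarity search plays in the app."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]

# Illustrative stand-ins for text chunks extracted from the book PDF.
chunks = [
    "Aspirin is used to reduce fever and relieve mild pain.",
    "The liver filters toxins from the blood.",
    "Ibuprofen relieves pain and reduces inflammation.",
]
print(top_k_chunks(chunks, "what relieves pain"))
```

In the actual app, the retrieved chunks are then filled into a PromptTemplate and sent to the quantized Llama-2-7b-chat model, which generates the final grounded answer.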
# Hi, I'm Adarsh! 👋
## 🔗 Links
[Personal website](https://adarshmaurya.onionreads.com/)
[LinkedIn](https://www.linkedin.com/in/adarsh-maurya-dev/)

## Feedback
If you have any feedback, please reach out to us at [email protected]