Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/chibuikeeugene/llm_app_question_and_answer
An LLM-powered question-and-answer app that takes any website plus a user query prompt and returns a suitable human-readable response. The program uses a locally hosted llama3 model to produce readable, understandable results.
- Host: GitHub
- URL: https://github.com/chibuikeeugene/llm_app_question_and_answer
- Owner: chibuikeeugene
- Created: 2024-07-03T08:15:14.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2024-09-12T10:25:56.000Z (4 months ago)
- Last Synced: 2024-09-12T21:50:56.245Z (4 months ago)
- Topics: generative-ai, langchain-python, llm
- Language: Jupyter Notebook
- Homepage:
- Size: 1.42 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# LLM Q&A with any website
An LLM-powered question-and-answer app that takes any website plus a user query prompt and returns a suitable human-readable response. The program uses a **locally hosted llama3 model** to produce readable, understandable results.
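The core idea is retrieval-augmented Q&A: fetch the website's text, split it into chunks, retrieve the chunk most relevant to the question, and hand that context to the model. The real project uses LangChain, FAISS, and a local Llama 3 model for this; the following is a dependency-free sketch of the same flow, where chunks are scored by simple word overlap instead of vector similarity and the "website" is an in-memory string, so the names and scoring here are illustrative only.

```python
# Dependency-free sketch of the retrieval idea behind the app.
# In the real project, FAISS similarity search and a Llama 3 chat
# model replace the word-overlap scoring and the printed prompt.
import re


def chunk_text(text: str, size: int = 40) -> list[str]:
    """Split page text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def top_chunk(chunks: list[str], query: str) -> str:
    """Pick the chunk sharing the most words with the query
    (this stands in for FAISS similarity search)."""
    q = set(re.findall(r"\w+", query.lower()))
    return max(chunks, key=lambda c: len(q & set(re.findall(r"\w+", c.lower()))))


def build_prompt(context: str, question: str) -> str:
    """Prompt the LLM with the retrieved context, as a RAG chain would."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"


page = ("Llama 3 is a family of open-weight language models. "
        "Streamlit lets you build data apps in pure Python.")
chunks = chunk_text(page, size=10)
print(build_prompt(top_chunk(chunks, "What is Llama 3?"), "What is Llama 3?"))
```

In the actual app, the chunking and retrieval are handled by LangChain components and the prompt is sent to the llama3 model served by Ollama rather than printed.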
## Dependencies and packages
1. python = "^3.10"
2. streamlit = "^1.36.0"
3. langchain = "^0.2.6"
4. llama3 (served locally via Ollama)
5. huggingface-hub = "^0.23.4"
6. python-dotenv = "^1.0.1"
7. langchain-community = "^0.2.6"
8. langchain-ollama = "^0.1.0"
9. beautifulsoup4 = "^4.12.3"
10. faiss-cpu = "^1.8.0.post1"
11. sphinx = "^8.0.2"
12. loguru = "^0.7.2"
13. fire = "^0.6.0"

## To run the application
1. Install llama3 and start the server locally with the command `ollama run llama3`.
2. Install the package dependencies with `poetry install`. Ensure you do this in a virtual environment.
3. Navigate to the `q_and_a` sub-folder and run `streamlit run main.py`.
4. Good luck!

## Snapshots
### Q&A architecture
![Q&A app schema]()
### Live view of the application running with Streamlit as the frontend
![Streamlit app 1]()
![Streamlit app 2]()