https://github.com/prakashjha1/document-chat-using-gpt
Developed a chat system that allows users to interact with documents using a large language model (LLM). The system uses a retrieval-augmented generation (RAG) approach, which combines the power of LLMs with the ability to retrieve relevant information from a document corpus.
- Host: GitHub
- URL: https://github.com/prakashjha1/document-chat-using-gpt
- Owner: prakashjha1
- Created: 2023-10-31T12:54:30.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2023-10-31T13:33:14.000Z (over 1 year ago)
- Last Synced: 2025-01-16T04:14:48.954Z (6 months ago)
- Topics: gpt-35-turbo, langchain, llm, python, retrieval-augmented-generation, streamlit
- Language: Jupyter Notebook
- Homepage:
- Size: 5.86 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
# Document-Chat-Using-GPT

Developed a chat system that allows users to interact with documents using a large language model (LLM). The system uses a retrieval-augmented generation (RAG) approach, which combines the power of LLMs with the ability to retrieve relevant information from a document corpus. When a user asks a question, the system first retrieves a set of relevant documents from the corpus, then uses the LLM to generate a response based on the retrieved documents and the user's question.
The system is able to answer questions about the content of the documents in a comprehensive and informative way, and it can also generate creative responses such as summaries and explanations of the documents.
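
The repository's topics list langchain and gpt-35-turbo, so the retrieve-then-generate flow described above could be wired up roughly as in the sketch below. This is a minimal illustration using the classic LangChain API; the function name, chunking parameters, and FAISS vector store are assumptions, not details taken from the notebook.

```python
# Illustrative sketch of the retrieve-then-generate flow using the classic
# LangChain API (assumed stack: OpenAI embeddings + FAISS + gpt-3.5-turbo).
# Function name and parameters are assumptions, not taken from the notebook.
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS


def build_qa_chain(documents):
    # Split the loaded documents into overlapping chunks for embedding.
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    chunks = splitter.split_documents(documents)

    # Embed the chunks and index them in an in-memory FAISS vector store.
    vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())

    # For each question, retrieve relevant chunks and let gpt-3.5-turbo
    # answer based on them and the user's question.
    llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
    return RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())


# Usage (requires OPENAI_API_KEY to be set):
# answer = build_qa_chain(docs).run("What is this document about?")
```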
#### Requirements to run this notebook:
1. An OpenAI API key
2. Google Colab (embedding the document will take time, depending on its size)

#### Types of supported documents:
1. pdf
2. docx
3. txt
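
Below is a hedged sketch of how these three formats could be loaded. It uses LangChain's standard document loaders; the specific loader classes are assumptions, since the notebook may load files differently.

```python
# Illustrative sketch: pick a LangChain document loader by file extension.
# The loader choices are assumptions about the implementation.
import os

from langchain.document_loaders import Docx2txtLoader, PyPDFLoader, TextLoader


def load_document(path):
    """Load a pdf, docx, or txt file into a list of LangChain documents."""
    ext = os.path.splitext(path)[1].lower()
    if ext == ".pdf":
        loader = PyPDFLoader(path)       # requires the pypdf package
    elif ext == ".docx":
        loader = Docx2txtLoader(path)    # requires the docx2txt package
    elif ext == ".txt":
        loader = TextLoader(path, encoding="utf-8")
    else:
        raise ValueError(f"Unsupported file type: {ext}")
    return loader.load()
```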
#### Steps to run this project:
1. Open this Jupyter notebook in Google Colab.
2. Create a gptConfig.ini file and add your OpenAI API key to it like this (a sketch of how the key can be read back follows these steps):

       [authorization]
       api_key = your_api_key

3. Run all the cells of the Jupyter notebook.
4. Some files will be generated automatically. Open the log.txt file.
5. Copy the external IP address, without the port number.
6. Click on the link generated by the last cell; it will redirect you to a new page.
7. Paste the copied IP address on the new page, and the Streamlit app will open.
8. The link can be shared with anyone to demo this project, provided the Colab session keeps running while the app is in use.
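
For reference, here is a minimal sketch of how the notebook might read the key back from gptConfig.ini with Python's standard configparser module. Only the section and key names come from step 2 above; everything else is an illustrative assumption.

```python
# Illustrative only: read the OpenAI API key from gptConfig.ini using the
# standard library. The [authorization] section and api_key entry match
# step 2 above; the variable names are assumptions.
import configparser
import os

config = configparser.ConfigParser()
config.read("gptConfig.ini")
openai_api_key = config["authorization"]["api_key"]

# Export the key so OpenAI/LangChain clients can pick it up from the environment.
os.environ["OPENAI_API_KEY"] = openai_api_key
```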