Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
LangChain Documentation Helper
- Host: GitHub
- URL: https://github.com/ashot72/langchain-documentation-helper
- Owner: Ashot72
- Created: 2023-05-13T17:04:31.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2023-05-16T06:34:58.000Z (over 1 year ago)
- Last Synced: 2024-11-08T03:23:43.037Z (about 2 months ago)
- Topics: chatgpt, embeddings, langchain, langchain-js, large-language-models, pinecone, vector-database, vectorstore
- Language: HTML
- Homepage:
- Size: 13.8 MB
- Stars: 1
- Watchers: 1
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# LangChain Documentation Helper
[LangChain](https://js.langchain.com/docs/) is the de facto go-to framework for building LLM (Large Language Model) based applications. It has gained massive popularity lately among developers who want to get into AI (Artificial Intelligence) and build AI-based applications.
I built a very basic LangChain Node.js Documentation Helper app using an LLM (Large Language Model). You can ask questions and get answers. You will see how we split our source files into chunks and store them in a vector database using embeddings; we use the [Pinecone](https://www.pinecone.io/) vector database. We then create a chain that takes the query (prompt), embeds it as a vector, finds the few stored vectors that are semantically closest to the query vector, and returns the corresponding chunks. These relevant chunks either contain the answer or have a high probability of containing it, and only those chunks are sent to the LLM. That way we make just one or two API calls, save money, get responses faster, and avoid redundant work. So we pass the prompt plus the relevant chunks (the context) to the LLM to get the answer.
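The splitting step described above can be sketched in plain JavaScript. This is a minimal, hypothetical illustration of chunking with overlap (the sizes are tiny for demonstration); the actual app would use a real text splitter such as the ones LangChain provides, and would then embed each chunk before storing it in Pinecone.

```javascript
// Minimal sketch of splitting a document into overlapping chunks before
// embedding. Real splitters (e.g. LangChain's text splitters) are more
// sophisticated; chunkSize/overlap values here are illustrative only.
function splitIntoChunks(text, chunkSize, overlap) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

const doc = "LangChain helps developers build LLM-based applications.";
console.log(splitIntoChunks(doc, 20, 5));
```

The overlap matters: without it, a sentence cut exactly at a chunk boundary could lose its meaning in both halves.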
We can also view the source documents that were used to retrieve the answer. This is useful if we want to let the user see the sources behind a generated answer.
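The retrieval-with-sources idea can be sketched without any external services. The chunk texts and embedding vectors below are toy placeholders (a real app would get embeddings from a model and query Pinecone); the point is only the nearest-neighbor lookup that selects which chunks, and hence which sources, go to the LLM.

```javascript
// Toy sketch of semantic retrieval: rank stored chunks by cosine
// similarity to the query embedding and return the top k as sources.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Hypothetical chunks with precomputed (toy) embedding vectors.
const chunks = [
  { text: "LangChain chains combine LLM calls.", embedding: [1, 0, 0] },
  { text: "Pinecone stores vectors for search.", embedding: [0, 1, 0] },
  { text: "Embeddings map text to vectors.", embedding: [0.9, 0.1, 0] },
];

// Return the k chunks closest to the query embedding; these doubled as
// the "source documents" shown to the user.
function topK(queryEmbedding, k) {
  return [...chunks]
    .sort((x, y) =>
      cosineSimilarity(y.embedding, queryEmbedding) -
      cosineSimilarity(x.embedding, queryEmbedding))
    .slice(0, k);
}

console.log(topK([1, 0, 0], 2).map(c => c.text));
```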
We also want our chat to be able to remember and reference things we asked earlier in the conversation, as when talking to ChatGPT.
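Conversational memory can be sketched very simply: keep the prior question/answer turns and prepend them to each new prompt so the model can resolve references to earlier questions. This is a hypothetical illustration, not the repo's actual implementation (LangChain offers dedicated memory and history-aware chain components for this).

```javascript
// Hypothetical sketch of chat memory: store past turns and include them
// as context in every new prompt sent to the LLM.
const history = [];

function recordTurn(question, answer) {
  history.push({ question, answer });
}

function buildPrompt(question) {
  const past = history
    .map(t => `Human: ${t.question}\nAI: ${t.answer}`)
    .join("\n");
  return past ? `${past}\nHuman: ${question}` : `Human: ${question}`;
}

recordTurn("What is LangChain?", "A framework for LLM apps.");
console.log(buildPrompt("Does it support JS?"));
```

In practice the history also grows the token count of every request, so real apps usually truncate or summarize old turns.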
To get started:
```
# clone the repository
git clone https://github.com/Ashot72/LangChain-Documentation-Helper
cd LangChain-Documentation-Helper

# add your keys to the .env file

# install dependencies
npm install

# embed the documents into the vector database
npm run embed

# run locally
npm start
```
Go to [LangChain Documentation Helper Video](https://youtu.be/c9ujzXuMx9Y) page
Go to [LangChain Documentation Helper description](https://ashot72.github.io/LangChain-Documentation-Helper/doc.html) page
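The setup steps above mention adding your keys to a `.env` file. The exact variable names are not stated in this README, so the following is only a hypothetical example of what such a file commonly looks like for an OpenAI + Pinecone project; check the repository's code for the names it actually reads.

```
# Hypothetical .env example -- variable names are assumptions, not taken
# from this repository.
OPENAI_API_KEY=your-openai-api-key
PINECONE_API_KEY=your-pinecone-api-key
PINECONE_ENVIRONMENT=your-pinecone-environment
PINECONE_INDEX=your-index-name
```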