Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/build-on-aws/rag-golang-postgresql-langchain
Implement RAG (using LangChain and PostgreSQL) for Go applications to improve the accuracy and relevance of LLM outputs
amazon-bedrock generative-ai golang langchain large-language-models pgvector postgresql
- Host: GitHub
- URL: https://github.com/build-on-aws/rag-golang-postgresql-langchain
- Owner: build-on-aws
- License: mit-0
- Created: 2024-04-13T01:59:02.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2024-04-19T13:06:50.000Z (9 months ago)
- Last Synced: 2024-12-17T17:40:56.048Z (about 1 month ago)
- Topics: amazon-bedrock, generative-ai, golang, langchain, large-language-models, pgvector, postgresql
- Language: Go
- Homepage: https://community.aws/content/2f1mRXuakNO22izRKDVNRazzxhb
- Size: 189 KB
- Stars: 22
- Watchers: 4
- Forks: 2
- Open Issues: 1
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
Awesome Lists containing this project
README
# How to use Retrieval Augmented Generation (RAG) for Go applications
> Implement RAG (using LangChain and PostgreSQL) to improve the accuracy and relevance of LLM outputs
This repository contains the source code for the blog post [How to use Retrieval Augmented Generation (RAG) for Go applications](https://community.aws/content/2f1mRXuakNO22izRKDVNRazzxhb), which covers how to use the [Go programming language](https://go.dev/) with vector databases and techniques such as Retrieval Augmented Generation (RAG) via [langchaingo](https://github.com/tmc/langchaingo).
![Architecture](arch.png)
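To make the moving parts concrete, here is a minimal sketch (not the repository's actual code) of creating a pgvector-backed vector store with langchaingo and indexing a couple of documents. The repository itself uses Amazon Bedrock; this sketch swaps in langchaingo's OpenAI client purely to keep the example short, and the connection string and sample documents are placeholders.

```go
package main

import (
	"context"
	"log"

	"github.com/tmc/langchaingo/embeddings"
	"github.com/tmc/langchaingo/llms/openai"
	"github.com/tmc/langchaingo/schema"
	"github.com/tmc/langchaingo/vectorstores/pgvector"
)

func main() {
	ctx := context.Background()

	// The repository uses Amazon Bedrock; the OpenAI client is used here only
	// to keep the sketch short. Any langchaingo embedder works the same way.
	llm, err := openai.New()
	if err != nil {
		log.Fatal(err)
	}
	embedder, err := embeddings.NewEmbedder(llm)
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder connection string; point it at a PostgreSQL instance with
	// the pgvector extension installed.
	store, err := pgvector.New(ctx,
		pgvector.WithConnectionURL("postgres://postgres:postgres@localhost:5432/postgres?sslmode=disable"),
		pgvector.WithEmbedder(embedder),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Embed and index a few sample documents; a real application would load
	// its own data (files, a wiki, a database, ...).
	_, err = store.AddDocuments(ctx, []schema.Document{
		{PageContent: "pgvector adds vector similarity search to PostgreSQL."},
		{PageContent: "langchaingo is a Go port of the LangChain framework."},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```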
Large Language Models (LLMs) and other foundation models have been trained on large corpora of data, enabling them to perform well at many natural language processing (NLP) tasks. One of their key limitations, however, is that they are trained on a static dataset with a specific knowledge cut-off (say, January 2023).
RAG (Retrieval Augmented Generation) enhances LLMs by dynamically retrieving external information during response generation, extending the model's knowledge beyond its original training data and its cut-off date. RAG-based solutions incorporate a vector store that can be indexed and queried to retrieve the most recent and relevant information. When an LLM equipped with RAG needs to generate a response, it first queries the vector store for relevant, up-to-date information related to the query and then augments its prompt with that retrieved context. This ensures that the model's outputs are not based solely on its pre-existing knowledge, improving the accuracy and relevance of its responses.
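A rough sketch of that retrieve-then-generate flow with langchaingo is shown below. It assumes an `llm` and a pgvector `store` like the ones in the previous sketch; the helper name `askWithRAG` and the choice of three retrieved documents are illustrative, and exact API names can differ between langchaingo versions.

```go
package main

import (
	"context"

	"github.com/tmc/langchaingo/chains"
	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/vectorstores"
	"github.com/tmc/langchaingo/vectorstores/pgvector"
)

// askWithRAG answers a question using retrieval augmented generation:
// the retriever embeds the question, pulls the nearest documents from the
// pgvector store, and the chain stuffs them into the prompt for the LLM.
func askWithRAG(ctx context.Context, llm llms.Model, store pgvector.Store, question string) (string, error) {
	// Retrieval: fetch the 3 most similar documents for the question.
	retriever := vectorstores.ToRetriever(store, 3)

	// Augmented generation: the RetrievalQA chain combines the retrieved
	// context with the question so the answer reflects the indexed data,
	// not just the model's training set.
	qaChain := chains.NewRetrievalQAFromLLM(llm, retriever)
	return chains.Run(ctx, qaChain, question)
}
```

Calling `askWithRAG(ctx, llm, store, "What does pgvector add to PostgreSQL?")` would then return an answer grounded in the documents indexed earlier rather than in the model's training data alone.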
## Security
See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
## License
This library is licensed under the MIT-0 License. See the LICENSE file.