https://github.com/tharushashehan/contextualized-search
This application shows how to run an LLM in a private, local environment through embedded GGML binaries such as Falcon. A GPT4All wrapper on top of the Python LangChain framework is used as a pipeline to load and query the model.
- Host: GitHub
- URL: https://github.com/tharushashehan/contextualized-search
- Owner: tharushashehan
- License: apache-2.0
- Created: 2023-11-02T19:47:37.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2023-11-02T20:14:50.000Z (over 1 year ago)
- Last Synced: 2025-01-13T03:41:41.071Z (5 months ago)
- Size: 7.81 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# contextualized-search
This application shows how to run an LLM in a private, local environment through embedded GGML binaries such as Falcon. A GPT4All wrapper on top of the Python LangChain framework is used as a pipeline to load and query the model.
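The repository itself contains the full pipeline; as a rough illustration of the approach it describes, the sketch below shows how a local GGML model can be loaded through LangChain's GPT4All wrapper and queried with a context-plus-question prompt. The model filename, prompt wording, and example inputs are placeholder assumptions, not taken from the repo.

```python
# Minimal sketch (assumptions, not the repo's actual code): load a local GGML
# binary such as a Falcon model via LangChain's GPT4All wrapper and run it in
# a simple prompt chain, entirely offline.
from langchain.llms import GPT4All
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Path to a locally downloaded GGML binary (hypothetical filename).
llm = GPT4All(model="./models/ggml-model-gpt4all-falcon-q4_0.bin", verbose=True)

# Prompt that injects retrieved context alongside the user's question.
prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "Use the context below to answer the question.\n"
        "Context: {context}\n"
        "Question: {question}\n"
        "Answer:"
    ),
)

chain = LLMChain(llm=llm, prompt=prompt)

# Example invocation with placeholder context and question.
answer = chain.run(
    context="GGML models run fully offline on the local CPU.",
    question="Where does inference happen?",
)
print(answer)
```

Because the model file lives on disk and inference runs locally, no data leaves the machine, which is the privacy property the project highlights.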