{"id":22883630,"url":"https://github.com/deadbits/llm-tools","last_synced_at":"2025-05-07T05:50:56.350Z","repository":{"id":181861832,"uuid":"665691537","full_name":"deadbits/llm-tools","owner":"deadbits","description":"Small tools to assist with using Large Language Models","archived":false,"fork":false,"pushed_at":"2023-11-07T00:30:59.000Z","size":74,"stargazers_count":11,"open_issues_count":0,"forks_count":0,"subscribers_count":2,"default_branch":"main","last_synced_at":"2025-03-31T07:02:02.848Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/deadbits.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2023-07-12T19:21:00.000Z","updated_at":"2024-07-11T15:12:46.000Z","dependencies_parsed_at":"2023-11-07T04:30:05.643Z","dependency_job_id":null,"html_url":"https://github.com/deadbits/llm-tools","commit_stats":null,"previous_names":["deadbits/llm-tools"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deadbits%2Fllm-tools","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deadbits%2Fllm-tools/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deadbits%2Fllm-tools/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deadbits%2Fllm-tools/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/deadbits","download_url":"https://codeload.github.com/deadbits/llm-tools/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind"
:"github","repositories_count":252823693,"owners_count":21809709,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-12-13T18:39:27.633Z","updated_at":"2025-05-07T05:50:56.327Z","avatar_url":"https://github.com/deadbits.png","language":"Python","readme":"# llm-tools\nCollection of tools to assist with using Large Language Models (LLMs)\n\n## Overview 📖\nThe ability to run an LLM on your home computer is a huge resource for productivity and development. This repo contains a handful of one-off scripts and demos for interacting with locally hosted LLMs, and some examples using the LangChain, EmbedChain, and LlamaIndex frameworks.\n\n**Index**\n* [OpenAI](/openai)\n* [RedPajama](/redpajama/)\n* [Llama2](/llama2/)\n* [MPT-7B](/mpt-7b/)\n* [Vicuna](/vicuna/)\n* [EmbedChain](/embedchain/)\n\n### ⭐ Featured: embedchain helper\n[embedchain](https://github.com/embedchain/embedchain) makes it very easy to embed data, add it to a ChromaDB instance, and then ask questions about your data with an LLM. 
I created a small helper to make this even easier: `ec-cli.py`\n\n```\n$ python ec-cli.py --help\nusage: ec-cli.py [-h] [-e EMBED] [--text TEXT] [-q QUERY] [-m {openai,llama2}]\n\nEmbedChain\n\noptions:\n  -h, --help            show this help message and exit\n  -e EMBED, --embed EMBED\n                        add new resource to db\n  --text TEXT           add text from local file\n  -q QUERY, --query QUERY\n                        Query the model\n  -m {openai,llama2}, --model {openai,llama2}\n                        llm model\n```\n\n![ec-cli.py demo](/assets/embedchain-cli.png)\n\nData added with the `--embed` or `--text` arguments is ingested into your ChromaDB.\nYou can also run [ec-api-server.py](/embedchain/ec-api-server.py) and post to the `/embed` endpoint.\n\nYou can then query your data using the `--query` argument or the `/query` endpoint of the API server.\n\n## Stack\nRunning models and tools locally is all well and good, but pretty quickly you'll want a more robust stack for things like:\n\n* Inference hosting\n* Orchestration\n* Retrieving data from external sources\n* Providing access to external tools\n* [Managing prompts](https://github.com/deadbits/prompt-serve)\n* Application hosting\n* Interaction via common applications (iMessage, Telegram, etc.)\n* Maintaining memory/history of past interactions\n* Embeddings model\n* Storing vector embeddings and metadata\n* Managing documents prior to embeddings creation\n* Logging\n\nThe list below includes a few of my favorites:\n* [prompt-serve](https://github.com/deadbits/prompt-serve)\n* [LlamaIndex](https://github.com/jerryjliu/llama_index)\n* [embedchain](https://github.com/embedchain/embedchain)\n* [LangChain](https://python.langchain.com/docs/get_started/introduction.html)\n* [ChromaDB](https://www.trychroma.com/)\n* [FastChat](https://github.com/lm-sys/FastChat)\n* [Gradio](https://www.gradio.app/)\n* [OpenAI](https://openai.com/)\n* [RedPajama](https://www.together.xyz/blog/redpajama-models-v1)\n* 
[Mosaic ML](https://huggingface.co/mosaicml)\n* [GPTCache](https://github.com/zilliztech/GPTCache)\n* [Lambda Cloud](https://cloud.lambdalabs.com/)\n* [Metal](https://getmetal.io/)\n* [BentoML](https://github.com/ssheng/BentoChain)\n* [Modal](https://modal.com/)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeadbits%2Fllm-tools","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdeadbits%2Fllm-tools","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeadbits%2Fllm-tools/lists"}