Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/storia-ai/sage
Chat with any codebase in under two minutes | Fully local or via third-party APIs
ai anthropic claude copilot developer-tools hacktoberfest hacktoberfest-2024 hacktoberfest2024 langchain llm openai pinecone python rag
Last synced: 4 days ago
- Host: GitHub
- URL: https://github.com/storia-ai/sage
- Owner: Storia-AI
- License: apache-2.0
- Created: 2024-08-23T23:14:55.000Z (3 months ago)
- Default Branch: main
- Last Pushed: 2024-10-30T01:12:18.000Z (22 days ago)
- Last Synced: 2024-11-09T21:03:11.628Z (11 days ago)
- Topics: ai, anthropic, claude, copilot, developer-tools, hacktoberfest, hacktoberfest-2024, hacktoberfest2024, langchain, llm, openai, pinecone, python, rag
- Language: Python
- Homepage: https://sage.storia.ai
- Size: 9.57 MB
- Stars: 1,030
- Watchers: 7
- Forks: 86
- Open Issues: 30
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
Awesome Lists containing this project
- awesome-ChatGPT-repositories - sage - Chat with any codebase in under two minutes | Fully local or via third-party APIs (Langchain)
README
Sage: Chat with any codebase
Our chat window, showing a conversation with the Transformers library.
***
**Sage** is like an open-source GitHub Copilot that helps you learn how a codebase works and how to integrate it into your project without spending hours sifting through the code.
# Main features
- **Dead-simple setup**. Follow our [quickstart guide](https://sage-docs.storia.ai/quickstart) to get started.
- **Runs locally or on the cloud**. When privacy is your priority, you can run the entire pipeline locally using [Ollama](https://ollama.com) for LLMs and [Marqo](https://github.com/marqo-ai/marqo) as a vector store. When optimizing for quality, you can use third-party LLM providers like OpenAI and Anthropic.
- **Wide range of built-in retrieval mechanisms**. We support both lightweight retrieval strategies (requiring nothing more than an LLM API key) and more traditional RAG (which requires indexing the codebase); see the conceptual sketch after this list. There are many knobs you can tune so that retrieval works well on your codebase.
- **Well-documented experiments**. We profile various strategies (for embeddings, retrieval, etc.) on our own benchmark and thoroughly [document the results](benchmarks/retrieval/README.md).
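The "traditional RAG" path mentioned above boils down to: chunk the repository, embed the chunks, retrieve the chunks most similar to the question, and hand them to an LLM. The sketch below is a minimal, hypothetical illustration of that flow, not Sage's actual API or retrieval code; it assumes the `openai` Python client (v1+) with `OPENAI_API_KEY` set in the environment, and every function name in it is a placeholder.

```python
# Hypothetical sketch of codebase RAG: chunk files, embed them, retrieve the most
# similar chunks for a question, and ask an LLM to answer with that context.
# This is NOT Sage's actual API; it only illustrates the general mechanism.
import math
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chunk_repo(repo_dir: str, max_chars: int = 2000) -> list[str]:
    """Split every Python file in the repo into fixed-size text chunks."""
    chunks = []
    for path in Path(repo_dir).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for start in range(0, len(text), max_chars):
            chunks.append(f"# {path}\n{text[start:start + max_chars]}")
    return chunks

def embed(texts: list[str]) -> list[list[float]]:
    """Embed a batch of texts with an OpenAI embedding model."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in response.data]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def answer(question: str, repo_dir: str, top_k: int = 5) -> str:
    """Retrieve the top_k most relevant chunks and ask the LLM with that context."""
    chunks = chunk_repo(repo_dir)
    chunk_vectors = embed(chunks)
    question_vector = embed([question])[0]
    ranked = sorted(
        zip(chunks, chunk_vectors),
        key=lambda pair: cosine(question_vector, pair[1]),
        reverse=True,
    )
    context = "\n\n".join(chunk for chunk, _ in ranked[:top_k])
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer questions about the given code."},
            {"role": "user", "content": f"Code context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(answer("How do I configure retries?", "./my-repo"))
```

A real indexing pipeline would persist the embeddings in a vector store (e.g. Pinecone or Marqo) instead of recomputing them per question, which is what the indexing step referenced above is for.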
# Want your repository hosted?
We're working to make all code on the internet searchable and understandable for devs. You can check out our [hosted app](https://sage.storia.ai). We pre-indexed a slew of OSS repos, and you can index your desired ones by simply pasting a GitHub URL.
If you're the maintainer of an OSS repo and would like a dedicated page on Code Sage (e.g. `sage.storia.ai/your-repo`), then send us a message at [[email protected]](mailto:[email protected]). We'll do it for free!
![](assets/sage.gif)
# Extensions & Contributions
We purposefully built the code to be modular, so you can plug in your desired embedding, LLM, and vector store providers by simply implementing the relevant abstract classes.
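For example, plugging in a custom embedding provider might look roughly like the sketch below. The abstract class name and method signature here are assumptions for illustration, not the exact abstract classes defined in this repo (check the source for those); the concrete implementation uses the `sentence-transformers` package.

```python
# Illustration of the plug-in pattern only: the base class name and signature are
# assumed, not copied from Sage's source. The idea is that any provider which
# implements the relevant abstract class can be dropped into the pipeline.
from abc import ABC, abstractmethod


class Embedder(ABC):
    """Hypothetical abstract embedding provider."""

    @abstractmethod
    def embed(self, texts: list[str]) -> list[list[float]]:
        ...


class LocalEmbedder(Embedder):
    """Concrete provider backed by a local sentence-transformers model."""

    def __init__(self, model_name: str = "all-MiniLM-L6-v2"):
        # pip install sentence-transformers
        from sentence_transformers import SentenceTransformer
        self.model = SentenceTransformer(model_name)

    def embed(self, texts: list[str]) -> list[list[float]]:
        return self.model.encode(texts).tolist()
```

An OpenAI- or Anthropic-backed provider would implement the same interface, which is what makes the local/cloud switch described in the feature list possible.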
Feel free to send feature requests to [[email protected]](mailto:[email protected]) or make a pull request!
# Contributors