https://github.com/langfuse/langfuse
🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
- Host: GitHub
- URL: https://github.com/langfuse/langfuse
- Owner: langfuse
- License: other
- Created: 2023-05-18T17:47:09.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2024-10-29T09:53:17.000Z (6 months ago)
- Last Synced: 2024-10-29T09:57:16.598Z (6 months ago)
- Topics: analytics, evaluation, gpt, hacktoberfest, langchain, large-language-models, llama-index, llm, llm-evaluation, llm-observability, llmops, monitoring, observability, open-source, openai, playground, prompt-engineering, prompt-management, self-hosted, ycombinator
- Language: TypeScript
- Homepage: https://langfuse.com/docs
- Size: 14.5 MB
- Stars: 6,151
- Watchers: 23
- Forks: 595
- Open Issues: 147
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Codeowners: .github/CODEOWNERS
- Security: SECURITY.md
Awesome Lists containing this project
- Awesome-LLM-Productization - Langfuse - Open source observability and analytics for LLM applications (Models and Tools / LLM Deployment)
- awesome-ai-sdks - GitHub
- awesome-llmops - Langfuse (LLMOps / Observability)
- awesome-open-data-centric-ai - langfuse
- awesome-ai - langfuse - Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. (LLM Engineering Platform)
- awesome-production-machine-learning - Langfuse - Langfuse is an observability & analytics solution for LLM-based applications. (Evaluation and Monitoring)
- StarryDivineSky - langfuse/langfuse - Stable SDKs + integrations for Typescript, Python, OpenAI, Langchain, Litellm, Flowise, Superagent, and Langflow (A01_Text Generation_Text Chat / Large language chat models and data)
- Awesome-LLM - Langfuse - Open Source LLM Engineering Platform 🪢 Tracing, Evaluations, Prompt Management, and Playground. (LLM Deployment)
- awesome - langfuse/langfuse - 🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with OpenTelemetry, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23 (TypeScript)
- project-awesome - langfuse/langfuse - 🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23 (TypeScript)
- Awesome-RAG - LangFuse - Open-source tool for tracking LLM metrics, observability, and prompt management. (📊 Metrics / Response Evaluation Metrics)
- AiTreasureBox - langfuse/langfuse - 🪢 Open source LLM engineering platform. Observability, metrics, evals, prompt management, testing, prompt playground, datasets, LLM evaluations -- 🍊YC W23 🤖 integrate via Typescript, Python / Decorators, OpenAI, Langchain, LlamaIndex, Litellm, Instructor, Mistral, Perplexity, Claude, Gemini, Vertex (Repos)
- awesome-ChatGPT-repositories - langfuse - 🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23 (Prompts)
- awesome_ai_agents - Langfuse - 🪢 Open source LLM engineering platform - LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with Llama… (Building / Prompt Engineering)
- jimsghstars - langfuse/langfuse - 🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23 (TypeScript)
- Awesome-LLM-RAG-Application - langfuse
- awesome-safety-critical-ai - `langfuse/langfuse`
README

Langfuse uses GitHub Discussions for support and feature requests.
We're hiring: join us in product engineering and technical go-to-market roles.
Langfuse is an **open source LLM engineering** platform. It helps teams collaboratively
**develop, monitor, evaluate,** and **debug** AI applications. Langfuse can be **self-hosted in minutes** and is **battle-tested**. [Watch a demo](https://langfuse.com/watch-demo).
## ✨ Core Features

- [LLM Application Observability](https://langfuse.com/docs/tracing): Instrument your app and start ingesting traces to Langfuse, thereby tracking LLM calls and other relevant logic in your app such as retrieval, embedding, or agent actions. Inspect and debug complex logs and user sessions. Try the interactive [demo](https://langfuse.com/docs/demo) to see this in action.
- [Prompt Management](https://langfuse.com/docs/prompts/get-started) helps you centrally manage, version control, and collaboratively iterate on your prompts. Thanks to strong caching on server and client side, you can iterate on prompts without adding latency to your application.
- [Evaluations](https://langfuse.com/docs/scores/overview) are key to the LLM application development workflow, and Langfuse adapts to your needs. It supports LLM-as-a-judge, user feedback collection, manual labeling, and custom evaluation pipelines via APIs/SDKs.
- [Datasets](https://langfuse.com/docs/datasets/overview) enable test sets and benchmarks for evaluating your LLM application. They support continuous improvement, pre-deployment testing, structured experiments, flexible evaluation, and seamless integration with frameworks like LangChain and LlamaIndex.
- [LLM Playground](https://langfuse.com/docs/playground) is a tool for testing and iterating on your prompts and model configurations, shortening the feedback loop and accelerating development. When you see a bad result in tracing, you can directly jump to the playground to iterate on it.
- [Comprehensive API](https://langfuse.com/docs/api): Langfuse is frequently used to power bespoke LLMOps workflows while using the building blocks provided by Langfuse via the API. OpenAPI spec, Postman collection, and typed SDKs for Python, JS/TS are available.
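As a hedged sketch of how a bespoke workflow might talk to the public API: the API authenticates with HTTP Basic auth, using the project's public key as the username and the secret key as the password. The keys below are placeholders, and the endpoint path is given for illustration; consult the API reference for the actual routes.

```python
import base64
import urllib.request

# Placeholder credentials -- use your own project keys from the Langfuse UI.
PUBLIC_KEY = "pk-lf-..."
SECRET_KEY = "sk-lf-..."
HOST = "https://cloud.langfuse.com"


def basic_auth_header(public_key: str, secret_key: str) -> str:
    """Build an HTTP Basic auth header: public key as username,
    secret key as password."""
    token = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
    return f"Basic {token}"


# Example: prepare (but do not send) a request to an illustrative endpoint.
req = urllib.request.Request(
    f"{HOST}/api/public/traces",
    headers={"Authorization": basic_auth_header(PUBLIC_KEY, SECRET_KEY)},
)
```

The typed Python and JS/TS SDKs wrap this for you; raw HTTP is only needed when no SDK fits your stack.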
## 📦 Deploy Langfuse

### Langfuse Cloud
Managed deployment by the Langfuse team, with a generous free tier (Hobby plan) and no credit card required.
### Self-Host Langfuse
Run Langfuse on your own infrastructure:
- [Local (docker compose)](https://langfuse.com/self-hosting/local): Run Langfuse on your own machine in 5 minutes using Docker Compose.

  ```bash
  # Get a copy of the latest Langfuse repository
  git clone https://github.com/langfuse/langfuse.git
  cd langfuse

  # Run the langfuse docker compose
  docker compose up
  ```

- [Kubernetes (Helm)](https://langfuse.com/self-hosting/kubernetes-helm): Run Langfuse on a Kubernetes cluster using Helm. This is the preferred production deployment.
- [VM](https://langfuse.com/self-hosting/docker-compose): Run Langfuse on a single Virtual Machine using Docker Compose.
- Planned: Cloud-specific deployment guides, please upvote and comment on the following threads: [AWS](https://github.com/orgs/langfuse/discussions/4645), [Google Cloud](https://github.com/orgs/langfuse/discussions/4646), [Azure](https://github.com/orgs/langfuse/discussions/4647).

See [self-hosting documentation](https://langfuse.com/self-hosting) to learn more about the architecture and configuration options.
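If you prefer pinning the deployment to a specific release rather than tracking the default tag, one option is a Docker Compose override file. This is an illustrative sketch only: the service name and image tag below are assumptions and must match whatever the repository's `docker-compose.yml` and the releases page actually define.

```yaml
# docker-compose.override.yml -- pin the Langfuse image to a release tag.
# Service name and tag are illustrative; check the repo's docker-compose.yml
# and the releases page for the real values.
services:
  langfuse-server:
    image: langfuse/langfuse:2
```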
## 🔌 Integrations

### Main Integrations:
| Integration | Supports | Description |
| ---------------------------------------------------------------------------- | -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| [SDK](https://langfuse.com/docs/sdk) | Python, JS/TS | Manual instrumentation using the SDKs for full flexibility. |
| [OpenAI](https://langfuse.com/docs/integrations/openai) | Python, JS/TS | Automated instrumentation using drop-in replacement of OpenAI SDK. |
| [Langchain](https://langfuse.com/docs/integrations/langchain) | Python, JS/TS | Automated instrumentation by passing callback handler to Langchain application. |
| [LlamaIndex](https://langfuse.com/docs/integrations/llama-index/get-started) | Python | Automated instrumentation via LlamaIndex callback system. |
| [Haystack](https://langfuse.com/docs/integrations/haystack) | Python | Automated instrumentation via Haystack content tracing system. |
| [LiteLLM](https://langfuse.com/docs/integrations/litellm) | Python, JS/TS (proxy only) | Use any LLM as a drop-in replacement for GPT. Use Azure, OpenAI, Cohere, Anthropic, Ollama, vLLM, SageMaker, HuggingFace, Replicate (100+ LLMs). |
| [Vercel AI SDK](https://langfuse.com/docs/integrations/vercel-ai-sdk) | JS/TS | TypeScript toolkit designed to help developers build AI-powered applications with React, Next.js, Vue, Svelte, Node.js. |
| [API](https://langfuse.com/docs/api) | | Directly call the public API. OpenAPI spec available. |

### Packages integrated with Langfuse:
| Name | Type | Description |
| ----------------------------------------------------------------------- | ------------------ | ----------------------------------------------------------------------------------------------------------------------- |
| [Instructor](https://langfuse.com/docs/integrations/instructor) | Library | Library to get structured LLM outputs (JSON, Pydantic) |
| [DSPy](https://langfuse.com/docs/integrations/dspy) | Library | Framework that systematically optimizes language model prompts and weights |
| [Mirascope](https://langfuse.com/docs/integrations/mirascope) | Library | Python toolkit for building LLM applications. |
| [Ollama](https://langfuse.com/docs/integrations/ollama) | Model (local) | Easily run open source LLMs on your own machine. |
| [Amazon Bedrock](https://langfuse.com/docs/integrations/amazon-bedrock) | Model | Run foundation and fine-tuned models on AWS. |
| [AutoGen](https://langfuse.com/docs/integrations/autogen) | Agent Framework | Open source framework for building multi-agent applications. |
| [Flowise](https://langfuse.com/docs/integrations/flowise) | Chat/Agent UI | JS/TS no-code builder for customized LLM flows. |
| [Langflow](https://langfuse.com/docs/integrations/langflow) | Chat/Agent UI | Python-based UI for LangChain, designed with react-flow to provide an effortless way to experiment and prototype flows. |
| [Dify](https://langfuse.com/docs/integrations/dify) | Chat/Agent UI | Open source LLM app development platform with no-code builder. |
| [OpenWebUI](https://langfuse.com/docs/integrations/openwebui) | Chat/Agent UI | Self-hosted LLM Chat web ui supporting various LLM runners including self-hosted and local models. |
| [Promptfoo](https://langfuse.com/docs/integrations/promptfoo) | Tool | Open source LLM testing platform. |
| [LobeChat](https://langfuse.com/docs/integrations/lobechat) | Chat/Agent UI | Open source chatbot platform. |
| [Vapi](https://langfuse.com/docs/integrations/vapi) | Platform | Open source voice AI platform. |
| [Inferable](https://langfuse.com/docs/integrations/other/inferable) | Agents | Open source LLM platform for building distributed agents. |
| [Gradio](https://langfuse.com/docs/integrations/other/gradio) | Chat/Agent UI | Open source Python library to build web interfaces like Chat UI. |
| [Goose](https://langfuse.com/docs/integrations/goose) | Agents | Open source LLM platform for building distributed agents. |
| [smolagents](https://langfuse.com/docs/integrations/smolagents) | Agents | Open source AI agents framework. |
| [CrewAI](https://langfuse.com/docs/integrations/crewai) | Agents | Multi agent framework for agent collaboration and tool use. |

## 🚀 Quickstart
Instrument your app and start ingesting traces to Langfuse, thereby tracking LLM calls and other relevant logic in your app such as retrieval, embedding, or agent actions. Inspect and debug complex logs and user sessions.
### 1️⃣ Create new project
1. [Create Langfuse account](https://cloud.langfuse.com/auth/sign-up) or [self-host](https://langfuse.com/self-hosting)
2. Create a new project
3. Create new API credentials in the project settings

### 2️⃣ Log your first LLM call
The [`@observe()` decorator](https://langfuse.com/docs/sdk/python/decorators) makes it easy to trace any Python LLM application. In this quickstart we also use the Langfuse [OpenAI integration](https://langfuse.com/docs/integrations/openai) to automatically capture all model parameters.
> [!TIP]
> Not using OpenAI? Visit [our documentation](https://langfuse.com/docs/get-started#log-your-first-llm-call-to-langfuse) to learn how to log other models and frameworks.

```bash
pip install langfuse openai
```

```bash filename=".env"
LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_PUBLIC_KEY="pk-lf-..."
LANGFUSE_HOST="https://cloud.langfuse.com" # 🇪🇺 EU region
# LANGFUSE_HOST="https://us.cloud.langfuse.com" # 🇺🇸 US region
```

```python /@observe()/ /from langfuse.openai import openai/ filename="main.py"
from langfuse.decorators import observe
from langfuse.openai import openai  # OpenAI integration

@observe()
def story():
    return openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "What is Langfuse?"}],
    ).choices[0].message.content

@observe()
def main():
    return story()

main()
```

### 3️⃣ See traces in Langfuse
See your language model calls and other application logic in Langfuse.

_[Public example trace in Langfuse](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/2cec01e3-3dc2-472f-afcf-3b968cf0c1f4?timestamp=2025-02-10T14%3A27%3A30.275Z&observation=cb5ff844-07ef-41e6-b8e2-6c64344bc13b)_
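The decorator pattern in the quickstart can be illustrated with a plain-Python sketch. This is not the real SDK: `traces` is a hypothetical in-memory sink (the actual SDK batches events to the Langfuse backend), and the sketch uses a bare `@observe` rather than the SDK's `@observe()`. It only shows how a tracing decorator can capture nested calls with names, timings, and outputs.

```python
import functools
import time

traces = []  # hypothetical in-memory sink; the real SDK sends events to Langfuse


def observe(fn):
    """Minimal stand-in for a tracing decorator: records the name,
    duration, and output of each decorated call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        traces.append({
            "name": fn.__name__,
            "duration_s": time.perf_counter() - start,
            "output": result,
        })
        return result
    return wrapper


@observe
def story():
    return "Langfuse is an open source LLM engineering platform."


@observe
def main():
    return story()


main()
# traces now holds one record per decorated call: the inner call (story)
# completes and is recorded first, then the outer call (main).
```

Nesting falls out naturally: inner calls finish first, so a backend can reconstruct the call tree from the recorded order and timings.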
> [!TIP]
>
> [Learn more](https://langfuse.com/docs/tracing) about tracing in Langfuse or play with the [interactive demo](https://langfuse.com/docs/demo).

## ⭐️ Star Us

## 💭 Support
Finding an answer to your question:
- Our [documentation](https://langfuse.com/docs) is the best place to start looking for answers. It is comprehensive, and we invest significant time into maintaining it. You can also suggest edits to the docs via GitHub.
- [Langfuse FAQs](https://langfuse.com/faq) where the most common questions are answered.
- Use "[Ask AI](https://langfuse.com/docs/ask-ai)" to get instant answers to your questions.

Support Channels:
- **Ask any question in our [public Q&A](https://github.com/orgs/langfuse/discussions/categories/support) on GitHub Discussions.** Please include as much detail as possible (e.g. code snippets, screenshots, background information) to help us understand your question.
- [Request a feature](https://github.com/orgs/langfuse/discussions/categories/ideas) on GitHub Discussions.
- [Report a Bug](https://github.com/langfuse/langfuse/issues) on GitHub Issues.
- For time-sensitive queries, ping us via the in-app chat widget.

## 🤝 Contributing
Your contributions are welcome!
- Vote on [Ideas](https://github.com/orgs/langfuse/discussions/categories/ideas) in GitHub Discussions.
- Raise and comment on [Issues](https://github.com/langfuse/langfuse/issues).
- Open a PR - see [CONTRIBUTING.md](CONTRIBUTING.md) for details on how to set up a development environment.

## 🥇 License
This repository is MIT licensed, except for the `ee` folders. See [LICENSE](LICENSE) and [docs](https://langfuse.com/docs/open-source) for more details.
## ⭐️ Star History
## ❤️ Open Source Projects Using Langfuse
Top open-source Python projects that use Langfuse, ranked by stars ([Source](https://github.com/langfuse/langfuse-docs/blob/main/components-mdx/dependents)):
| Repository | Stars |
| :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----: |
|[langgenius](https://github.com/langgenius) / [dify](https://github.com/langgenius/dify) | 54865 |
|[open-webui](https://github.com/open-webui) / [open-webui](https://github.com/open-webui/open-webui) | 51531 |
|[lobehub](https://github.com/lobehub) / [lobe-chat](https://github.com/lobehub/lobe-chat) | 49003 |
|[langflow-ai](https://github.com/langflow-ai) / [langflow](https://github.com/langflow-ai/langflow) | 39093 |
|[run-llama](https://github.com/run-llama) / [llama_index](https://github.com/run-llama/llama_index) | 37368 |
|[chatchat-space](https://github.com/chatchat-space) / [Langchain-Chatchat](https://github.com/chatchat-space/Langchain-Chatchat) | 32486 |
|[FlowiseAI](https://github.com/FlowiseAI) / [Flowise](https://github.com/FlowiseAI/Flowise) | 32448 |
|[mindsdb](https://github.com/mindsdb) / [mindsdb](https://github.com/mindsdb/mindsdb) | 26931 |
|[twentyhq](https://github.com/twentyhq) / [twenty](https://github.com/twentyhq/twenty) | 24195 |
|[PostHog](https://github.com/PostHog) / [posthog](https://github.com/PostHog/posthog) | 22618 |
|[BerriAI](https://github.com/BerriAI) / [litellm](https://github.com/BerriAI/litellm) | 15151 |
|[mediar-ai](https://github.com/mediar-ai) / [screenpipe](https://github.com/mediar-ai/screenpipe) | 11037 |
|[formbricks](https://github.com/formbricks) / [formbricks](https://github.com/formbricks/formbricks) | 9386 |
|[anthropics](https://github.com/anthropics) / [courses](https://github.com/anthropics/courses) | 8385 |
|[GreyDGL](https://github.com/GreyDGL) / [PentestGPT](https://github.com/GreyDGL/PentestGPT) | 7374 |
|[superagent-ai](https://github.com/superagent-ai) / [superagent](https://github.com/superagent-ai/superagent) | 5391 |
|[promptfoo](https://github.com/promptfoo) / [promptfoo](https://github.com/promptfoo/promptfoo) | 4976 |
|[onlook-dev](https://github.com/onlook-dev) / [onlook](https://github.com/onlook-dev/onlook) | 4141 |
|[Canner](https://github.com/Canner) / [WrenAI](https://github.com/Canner/WrenAI) | 2526 |
|[pingcap](https://github.com/pingcap) / [autoflow](https://github.com/pingcap/autoflow) | 2061 |
|[MLSysOps](https://github.com/MLSysOps) / [MLE-agent](https://github.com/MLSysOps/MLE-agent) | 1161 |
|[open-webui](https://github.com/open-webui) / [pipelines](https://github.com/open-webui/pipelines) | 1100 |
|[alishobeiri](https://github.com/alishobeiri) / [thread](https://github.com/alishobeiri/thread) | 1074 |
|[topoteretes](https://github.com/topoteretes) / [cognee](https://github.com/topoteretes/cognee) | 971 |
|[bRAGAI](https://github.com/bRAGAI) / [bRAG-langchain](https://github.com/bRAGAI/bRAG-langchain) | 823 |
|[opslane](https://github.com/opslane) / [opslane](https://github.com/opslane/opslane) | 677 |
|[dynamiq-ai](https://github.com/dynamiq-ai) / [dynamiq](https://github.com/dynamiq-ai/dynamiq) | 639 |
|[theopenconversationkit](https://github.com/theopenconversationkit) / [tock](https://github.com/theopenconversationkit/tock) | 514 |
|[andysingal](https://github.com/andysingal) / [llm-course](https://github.com/andysingal/llm-course) | 394 |
|[phospho-app](https://github.com/phospho-app) / [phospho](https://github.com/phospho-app/phospho) | 384 |
|[sentient-engineering](https://github.com/sentient-engineering) / [agent-q](https://github.com/sentient-engineering/agent-q) | 370 |
|[sql-agi](https://github.com/sql-agi) / [DB-GPT](https://github.com/sql-agi/DB-GPT) | 324 |
|[PostHog](https://github.com/PostHog) / [posthog-foss](https://github.com/PostHog/posthog-foss) | 305 |
|[vespperhq](https://github.com/vespperhq) / [vespper](https://github.com/vespperhq/vespper) | 304 |
|[block](https://github.com/block) / [goose](https://github.com/block/goose) | 295 |
|[aorwall](https://github.com/aorwall) / [moatless-tools](https://github.com/aorwall/moatless-tools) | 291 |
|[dmayboroda](https://github.com/dmayboroda) / [minima](https://github.com/dmayboroda/minima) | 221 |
|[RobotecAI](https://github.com/RobotecAI) / [rai](https://github.com/RobotecAI/rai) | 172 |
|[i-am-alice](https://github.com/i-am-alice) / [3rd-devs](https://github.com/i-am-alice/3rd-devs) | 148 |
|[8090-inc](https://github.com/8090-inc) / [xrx-sample-apps](https://github.com/8090-inc/xrx-sample-apps) | 138 |
|[babelcloud](https://github.com/babelcloud) / [LLM-RGB](https://github.com/babelcloud/LLM-RGB) | 135 |
|[souzatharsis](https://github.com/souzatharsis) / [tamingLLMs](https://github.com/souzatharsis/tamingLLMs) | 129 |
|[LibreChat-AI](https://github.com/LibreChat-AI) / [librechat.ai](https://github.com/LibreChat-AI/librechat.ai) | 128 |
|[deepset-ai](https://github.com/deepset-ai) / [haystack-core-integrations](https://github.com/deepset-ai/haystack-core-integrations) | 126 |
## 🔒 Security & Privacy
We take data security and privacy seriously. Please refer to our [Security and Privacy](https://langfuse.com/security) page for more information.
### Telemetry
By default, Langfuse automatically reports basic usage statistics of self-hosted instances to a centralized server (PostHog).
This helps us to:
1. Understand how Langfuse is used and improve the most relevant features.
2. Track overall usage for internal and external (e.g. fundraising) reporting.

None of this data is shared with third parties, and it does not include any sensitive information. We want to be transparent about this; you can find the exact data we collect [here](/web/src/features/telemetry/index.ts).
You can opt-out by setting `TELEMETRY_ENABLED=false`.
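For example, in a self-hosted docker compose deployment, the opt-out can sit alongside the other environment variables (placement in a `.env` file is illustrative):

```bash filename=".env"
# Disable anonymous usage reporting for this self-hosted instance
TELEMETRY_ENABLED=false
```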