What is Langfuse?
Langfuse is an open-source LLM engineering platform for debugging and improving LLM applications. It captures complete traces of your applications and agents with OpenTelemetry-based observability; those traces can be used to inspect failures and to build evaluation datasets. The platform provides LLM tracing, prompt management, evaluation, human annotation, datasets, metrics, and a playground for experimentation. Langfuse ships Python and JS/TS SDKs and integrates with popular LLM and agent libraries, including the OpenAI SDK, Langchain, LlamaIndex, LiteLLM, Dify, Flowise, Langflow, the Vercel AI SDK, and Instructor. From observability to evaluation, Langfuse covers the full production LLM development workflow.
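The tracing idea at the heart of the platform can be sketched with a plain decorator: each instrumented call is recorded as a span (name, inputs, output, duration) inside a trace. The snippet below is a stdlib-only illustration of that concept, not the Langfuse SDK's actual API; the function names and span fields here are hypothetical, though the real Python SDK's decorator-based interface is similar in spirit.

```python
import time
import functools

# In-memory trace: a list of spans. A real observability backend would
# export these (e.g. via OpenTelemetry) instead of keeping them locally.
trace = []

def observe(fn):
    """Record each call to fn as a span in the trace (illustrative only)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        trace.append({
            "name": fn.__name__,
            "input": {"args": args, "kwargs": kwargs},
            "output": result,
            "duration_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@observe
def generate(prompt):
    # Stand-in for a real LLM call.
    return f"echo: {prompt}"

generate("hello")
print(trace[0]["name"], trace[0]["output"])  # generate echo: hello
```

Because every span carries its inputs and outputs, failed calls can be inspected after the fact and promoted into evaluation datasets, which is the workflow the paragraph above describes.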
Key Features
- LLM tracing and observability
- OpenTelemetry-based traces
- Prompt management
- Evaluation and scoring
- Human annotation workflows
- Dataset management
- Metrics and analytics
- LLM Playground
- Langchain integration
- OpenAI SDK support
- LlamaIndex integration
- Self-hosting option
Privacy & Data Protection
Langfuse is developed in Germany (EU) and offers self-hosting for complete control over LLM traces and evaluation data. Keep prompts, completions, and user interactions on your own infrastructure. The open-source codebase is auditable. For organizations building LLM applications with sensitive data, self-hosted Langfuse ensures AI observability data stays within European or on-premises environments.
Pricing
Langfuse Cloud offers a free tier for getting started. Team and Enterprise plans add advanced features and support. Self-hosting is available for organizations requiring on-premises deployment. Visit langfuse.com for current pricing.
Ready to try Langfuse?
Join thousands of users who have switched to this European alternative.
Get Started with Langfuse →
