Langfuse: Open Source LLM Engineering Platform

How do you become an AI innovator? Start with the right guide, and this is your first step.
Join Build Fast with AI’s Gen AI Launch Pad 2025—a 6-week program designed to empower you with the tools and skills to lead in AI innovation.
Introduction
Large Language Models (LLMs) like OpenAI's GPT have transformed the way we interact with AI, offering tools to create dynamic, intelligent applications. However, the development, debugging, and management of these applications can be challenging. Enter Langfuse: an open-source platform designed to simplify LLM engineering through tools for tracing, prompt management, and evaluation.
Langfuse provides a streamlined approach to managing LLM-driven projects by enabling developers to efficiently version, debug, and analyze prompts and model behavior. This blog will guide you through using Langfuse, explaining its features with practical code examples, real-world applications, and best practices for maximizing its potential.
By the end of this blog, you will:
- Understand Langfuse’s core functionalities.
- Learn how to integrate Langfuse into your LLM projects.
- See how to manage prompts, evaluate outputs, and build robust applications with ease.
Getting Started with Langfuse
Setup and Installation
To get started, you’ll need to install Langfuse and a few related libraries. Langfuse supports integration with OpenAI models and other LLM frameworks. Install the necessary packages as follows:
```python
!pip install langfuse langchain langchain-openai
```
Configuring API Keys
Langfuse requires API keys for authentication and integration. Here’s how to set them up:
- Retrieve your keys: Sign up and get your project keys from Langfuse Cloud.
- Set up keys in your environment:
```python
import os
from google.colab import userdata

os.environ["LANGFUSE_PUBLIC_KEY"] = userdata.get('LANGFUSE_PUBLIC_KEY')
os.environ["LANGFUSE_SECRET_KEY"] = userdata.get('LANGFUSE_SECRET_KEY')
os.environ["OPENAI_API_KEY"] = userdata.get('OPENAI_API_KEY')
```
Explanation:
- The environment variables `LANGFUSE_PUBLIC_KEY` and `LANGFUSE_SECRET_KEY` authenticate your Langfuse access.
- `OPENAI_API_KEY` integrates OpenAI models for prompt execution.
Real-world Use Case: Applications requiring API-heavy tasks, such as interactive chatbots or text summarizers, benefit greatly from Langfuse’s credential management.
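If you are running outside Colab, the same environment variables can be set directly in a plain Python script; the key values below are placeholders, not real credentials:

```python
# Setting the same keys outside Colab (plain Python script).
# The values below are placeholders -- substitute your own project
# keys from Langfuse Cloud and your OpenAI API key.
import os

os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."  # placeholder
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."  # placeholder
os.environ["OPENAI_API_KEY"] = "sk-..."          # placeholder
```

In production, prefer loading these from a secrets manager or a `.env` file rather than hard-coding them.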
Building with Langfuse: Core Features
1. Managing Prompts with Langfuse
Langfuse simplifies prompt management by enabling developers to create, modify, and version prompts programmatically.
Define a Prompt
Here’s an example of creating a prompt for summarizing stories:
```python
from langfuse import Langfuse

langfuse = Langfuse()

langfuse.create_prompt(
    name="story_summarization",
    prompt="Extract the key information from this text and return it in JSON format. Use the following schema: {{json_schema}}",
    config={
        "model": "gpt-3.5-turbo-1106",
        "temperature": 0,
        "json_schema": {
            "main_character": "string (name of protagonist)",
            "key_content": "string (1 sentence)",
            "keywords": "array of strings",
            "genre": "string (genre of story)",
            "critic_review_comment": "string (similar to a New York Times critic)",
            "critic_score": "number (0-10)"
        }
    },
    labels=["production"]
)
```
Explanation:
- `create_prompt`: Defines and stores a prompt in Langfuse.
- `config`: Specifies model settings and the JSON schema for output.
Expected Output:
Langfuse stores the prompt for reuse and versioning, making it easier to manage changes.
Real-world Use Case:
For applications like automated content generation or knowledge extraction, prompt management ensures consistency across different iterations.
2. Retrieving and Compiling Prompts
Langfuse allows developers to retrieve and compile the latest version of a prompt dynamically.
```python
prompt = langfuse.get_prompt("story_summarization")
prompt.compile(json_schema="TEST SCHEMA")
```
Explanation:
- `get_prompt`: Retrieves the current version of the specified prompt.
- `compile`: Dynamically replaces placeholders (e.g., `{{json_schema}}`) with provided values.
Expected Output:
```
'Extract the key information from this text and return it in JSON format. Use the following schema: TEST SCHEMA'
```
Real-world Use Case:
Dynamic prompt compilation is ideal for chatbots or other interactive systems where input variables change frequently.
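The substitution that `compile` performs can be pictured with a tiny stand-in (this is illustrative plain Python, not the Langfuse implementation):

```python
# Minimal stand-in for the {{variable}} substitution that
# prompt.compile() performs -- illustrative only, not Langfuse code.
def compile_prompt(template: str, **variables) -> str:
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", str(value))
    return template

template = ("Extract the key information from this text and return it "
            "in JSON format. Use the following schema: {{json_schema}}")
print(compile_prompt(template, json_schema="TEST SCHEMA"))
```

Because the placeholders live in the stored prompt rather than in application code, you can reword a prompt in Langfuse without redeploying the application.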
3. Functional Summarization with Langfuse
Using the managed prompt, let’s build a function to summarize stories programmatically:
```python
from langfuse.openai import OpenAI
import json

client = OpenAI()

def summarize_story(story):
    json_schema_str = ', '.join(
        [f"'{key}': {value}" for key, value in prompt.config["json_schema"].items()]
    )
    system_message = prompt.compile(json_schema=json_schema_str)

    messages = [
        {"role": "system", "content": system_message},
        {"role": "user", "content": story}
    ]

    res = client.chat.completions.create(
        model=prompt.config["model"],
        temperature=prompt.config["temperature"],
        messages=messages,
        response_format={"type": "json_object"},
        langfuse_prompt=prompt
    )

    return json.loads(res.choices[0].message.content)
```
Explanation:
- `summarize_story`: Leverages Langfuse’s managed prompt and OpenAI’s API to process and summarize stories.
- `langfuse_prompt`: Links the generation to the managed prompt so usage is tracked for debugging and improvement.
Expected Output:
Given a story input, the function generates a structured JSON output:
```python
STORY = """In a bustling city... Whisper discovered her true happiness."""
summarize_story(STORY)
```
Output:
```json
{
  "main_character": "Whisper",
  "key_content": "Whisper, a lonely cat, discovers an abandoned hat with the power to make her invisible and uses it to help the less fortunate, finding true happiness in her secret kindness.",
  "keywords": ["Whisper", "lonely cat", "abandoned hat", "invisible", "less fortunate", "true happiness"],
  "genre": "Fantasy",
  "critic_review_comment": "Whisper's heartwarming journey through the bustling city is a touching tale of selflessness and discovery.",
  "critic_score": 9
}
```
Real-world Use Case:
This function is ideal for summarizing user-generated content, news articles, or creative stories in applications requiring structured data.
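Because the schema keys are fixed in the prompt config, a lightweight downstream check can catch malformed model responses before they reach your application. A minimal sketch (`validate_summary` is a hypothetical helper; the key names mirror the schema defined earlier):

```python
# Sketch: verify that a summary dict contains every key declared in
# the prompt's json_schema before using it downstream.
# validate_summary is a hypothetical helper, not part of Langfuse.
EXPECTED_KEYS = {
    "main_character", "key_content", "keywords",
    "genre", "critic_review_comment", "critic_score",
}

def validate_summary(summary: dict) -> dict:
    missing = EXPECTED_KEYS - summary.keys()
    if missing:
        raise ValueError(f"Summary missing keys: {sorted(missing)}")
    return summary

sample = {
    "main_character": "Whisper",
    "key_content": "A lonely cat finds happiness in secret kindness.",
    "keywords": ["Whisper", "cat"],
    "genre": "Fantasy",
    "critic_review_comment": "A touching tale.",
    "critic_score": 9,
}
validate_summary(sample)  # returns the dict unchanged
```

Even with `response_format={"type": "json_object"}`, the model is not guaranteed to emit every field, so a check like this is cheap insurance.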
Integrating Langfuse with LangChain
Langfuse provides seamless integration with LangChain for enhanced tracing and prompt management in AI workflows.
Setting Up the Integration
```python
from langfuse.callback import CallbackHandler

langfuse_callback_handler = CallbackHandler()

# Verify that credentials are configured correctly
assert langfuse.auth_check()
assert langfuse_callback_handler.auth_check()
```
Explanation:
- `CallbackHandler`: Enables tracing and logging in LangChain workflows.
- `auth_check`: Confirms proper configuration of Langfuse credentials.
Real-world Use Case:
Combine Langfuse with LangChain for projects involving multi-step LLM interactions, where traceability and debugging are crucial.
Conclusion
Langfuse revolutionizes LLM engineering with its comprehensive tools for prompt management, debugging, and tracing. Whether optimizing chatbot responses or building advanced AI workflows, Langfuse simplifies the process while ensuring consistent, high-quality outputs.
Key Takeaways:
- Langfuse simplifies prompt versioning and debugging.
- Integration with OpenAI and LangChain enables scalable, traceable applications.
- Real-world applications range from content summarization to interactive event planning.
Resources
- Langfuse Documentation
- LangChain Official Site
- OpenAI API
- JSON Schema
- Langfuse Build Fast With AI Notebook
---------------------------------
Stay Updated: Follow Build Fast with AI pages for all the latest AI updates and resources.
Experts predict 2025 will be the defining year for Gen AI implementation. Want to be ahead of the curve?
Join Build Fast with AI’s Gen AI Launch Pad 2025 - your accelerated path to mastering AI tools and building revolutionary applications.