E2B: Integrating Language Models with Python Execution for Advanced Analytics

You’re not just reading about AI today — you’re about to build it.
Join Build Fast with AI’s Gen AI Launch Pad 2025—a 6-week transformation program designed to accelerate your AI mastery and empower you to build revolutionary applications.
What is E2B?
E2B (Execution to Bot) is a Python library that pairs language models with secure, sandboxed execution environments. It enables you to:
- Execute Python code in a secure, sandboxed environment.
- Generate Python code dynamically based on natural language.
- Automate workflows with the power of language models.
- Visualize data and conduct advanced analytics effortlessly.
Why Use E2B?
Traditional language models can generate code effectively, but executing that code safely within your environment can be challenging due to security risks or dependency conflicts. E2B addresses this by offering:
- Sandboxed Execution: Run code in isolated environments to prevent harm to your system.
- Real-Time Execution: Immediate feedback and results from executed code.
- Error Handling: Detailed error messages to troubleshoot code issues.
- Versatility: Useful for data analysis, visualization, automation, and more.
E2B allows developers and data scientists to combine language model capabilities with robust execution environments, making tasks like automated code generation, data analysis, and problem-solving easier and more efficient.
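Before wiring E2B into a larger workflow, here is a minimal sketch of sandboxed execution on its own. It assumes e2b_code_interpreter version 1.0.0 (installed in Step 1 below) and an E2B API key exported in your environment:
from e2b_code_interpreter import Sandbox

# Requires an E2B API key in the E2B_API_KEY environment variable
sandbox = Sandbox()
execution = sandbox.run_code("print('hello from the sandbox')")
print(execution.logs.stdout)  # stdout captured inside the sandbox
sandbox.kill()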
LangChain + Function Calling + E2B Integration
This notebook demonstrates how to integrate LangChain, function calling, and E2B to build an environment where a language model can:
- Generate Python code dynamically based on user input.
- Execute that code in a secure sandbox.
- Return the results for further analysis or visualization.
What is LangChain?
LangChain is a framework designed for developing applications powered by large language models (LLMs). It allows developers to:
- Chain together multiple LLM calls and tools to create complex workflows.
- Integrate external tools and APIs for enhanced capabilities.
- Create prompt templates and manage chains effectively.
Combining LangChain with E2B enables a seamless workflow where language models not only generate code but also execute and refine it based on real-time results.
Let’s break this integration down step-by-step.
Step 1: Install the Required Libraries
The first step is installing the necessary libraries for E2B and LangChain. Run the following command:
%pip install e2b_code_interpreter==1.0.0 langchain langchainhub langchain-openai
Explanation of the Libraries:
- e2b_code_interpreter: Provides the core functionality for code execution within a sandboxed environment.
- langchain: A framework for developing applications powered by language models.
- langchainhub: Offers resources and templates for LangChain development.
- langchain-openai: Provides integrations with OpenAI’s language models.
Ensure that your environment is set up correctly by verifying the installations, as sketched below.
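One quick way to verify is to query each package's installed version with the standard-library importlib.metadata module; a minimal sketch:
from importlib.metadata import version

# Print the installed version of each package from Step 1
for dist in ["e2b-code-interpreter", "langchain", "langchainhub", "langchain-openai"]:
    print(f"{dist}: {version(dist)}")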
Step 2: Import Libraries
After installation, import the necessary libraries to set up the execution environment:
from e2b_code_interpreter import Sandbox
from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
Detailed Breakdown of Imports:
- Sandbox: This class handles the secure execution of code within E2B (it was named CodeInterpreter in pre-1.0 releases of the library).
- OpenAI: This class from the langchain-openai package installed in Step 1 interfaces with OpenAI's completion models; use ChatOpenAI for chat models such as GPT-4.
- PromptTemplate: Helps create reusable templates for prompting the language model.
- LLMChain: Chains together prompts and language model calls for sequential tasks.
Step 3: Initialize the Code Interpreter
Create a Sandbox instance to enable code execution:
code_interpreter = Sandbox()
Why Use a Code Interpreter?
The sandbox provides a safe environment to run code generated by the language model. This prevents potential security risks and ensures the code runs in isolation. Note that constructing a Sandbox starts a remote E2B session, so your E2B_API_KEY environment variable must be set.
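To make the isolation concrete, here is a small sketch: a directory listing executed through the sandbox reflects the sandbox's filesystem, not your own machine's.
from e2b_code_interpreter import Sandbox

sandbox = Sandbox()
# This listing comes from the sandbox's filesystem, not your host machine
execution = sandbox.run_code("import os; print(os.listdir('/'))")
print(execution.logs.stdout)
sandbox.kill()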
Step 4: Set Up the Language Model
Initialize the language model using LangChain's OpenAI class (this reads your OPENAI_API_KEY environment variable):
llm = OpenAI(temperature=0)
Parameters Explained:
- Temperature: Controls the randomness of the output. A temperature of 0 makes the output as close to deterministic as possible, so the same prompt tends to yield the same result.
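If you want more control, the wrapper also accepts a model name and sampling limits; a hedged sketch (the model name is an assumption, pick one your OpenAI account can access):
from langchain_openai import OpenAI

# Model name is an assumption; any OpenAI completion model works here
llm = OpenAI(model="gpt-3.5-turbo-instruct", temperature=0, max_tokens=512)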
Step 5: Create a Prompt Template
Use PromptTemplate
to create structured prompts that guide the language model in generating Python code:
prompt_template = PromptTemplate(
    input_variables=["task"],
    template="""
    You are a Python programmer. Your task is to generate Python code that can complete the following task:
    {task}
    """
)
Detailed Explanation:
- input_variables: Defines placeholders for dynamic inputs, in this case task.
- template: The instruction given to the model, guiding it to generate Python code relevant to the task provided.
You can customize this template to suit different coding scenarios or add constraints to refine the output.
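For example, here is a sketch of a variation that constrains the model to emit only runnable code; a constraint like this helps later, because explanations or markdown fences around the code would fail to execute:
strict_template = PromptTemplate(
    input_variables=["task"],
    template="""
    You are a Python programmer. Write Python code that completes the following task:
    {task}
    Respond with only the Python code itself: no explanations and no markdown fences.
    """
)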
Step 6: Chain the LLM with Code Execution
Create an LLMChain to link the language model's code generation with the execution step:
chain = LLMChain(llm=llm, prompt=prompt_template)
What is an LLMChain?
An LLMChain combines:
- A language model (llm) that generates responses.
- A prompt template that structures the input for the model.
This chain allows you to repeatedly generate and execute code based on different tasks.
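As a quick sketch of that reuse, the same chain and sandbox built in Steps 3 and 6 can serve a whole list of tasks:
tasks = [
    "Reverse the string 'hello world'.",
    "Compute the factorial of 8.",
]

for task in tasks:
    code = chain.run(task=task)                   # generate Python for this task
    execution = code_interpreter.run_code(code)   # run it in the sandbox
    print(task, "->", execution.text)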
Step 7: Execute a Task
Now let's run a sample task: generate and execute code to calculate the sum of the first 10 natural numbers.
# Define the task
user_task = "Calculate the sum of the first 10 natural numbers."

# Generate Python code
generated_code = chain.run(task=user_task)
print("Generated Code:\n", generated_code)

# Execute the generated code in the sandbox
output = code_interpreter.run_code(generated_code)
print("Execution Result:\n", output.text)
Step-by-Step Breakdown:
- Define the task: Specify the task in plain English.
- Generate code: The language model produces Python code for the task.
- Execute code: Run the generated code using code_interpreter.run_code().
Sample Output:
Generated Code:
sum(range(1, 11))
Execution Result:
55
The model generates and executes the code correctly, returning the expected result.
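In practice, models sometimes wrap their answer in markdown fences or add commentary, which would fail as raw Python. A small defensive helper is worth adding before execution; extract_code below is a hypothetical name, not part of E2B or LangChain:
import re

def extract_code(llm_output: str) -> str:
    """Return the contents of the first ``` fenced block, or the raw text if none."""
    match = re.search(r"```(?:python)?\n(.*?)```", llm_output, re.DOTALL)
    return match.group(1) if match else llm_output

output = code_interpreter.run_code(extract_code(generated_code))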
Step 8: Visualizing Data
You can extend this to more complex tasks like plotting graphs. For instance, let’s visualize a quadratic function:
# Define the visualization task
user_task = "Plot a graph of y = x^2 for x ranging from -10 to 10."

# Generate Python code
generated_code = chain.run(task=user_task)
print("Generated Code:\n", generated_code)

# Execute the generated code in the sandbox
output = code_interpreter.run_code(generated_code)
print("Execution Result:\n", output.text)
Expected Outcome:
A plot of y = x^2 for x from -10 to 10. Because the code runs in a remote sandbox, the chart comes back as part of the execution results, which you can then render or save in your notebook.
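A minimal sketch of retrieving that chart, assuming e2b_code_interpreter 1.0.0, where matplotlib figures come back on the execution's results as base64-encoded PNG data:
import base64

# Each result may carry a base64-encoded PNG of a figure drawn in the sandbox
for i, result in enumerate(output.results):
    if result.png:
        with open(f"chart_{i}.png", "wb") as f:
            f.write(base64.b64decode(result.png))
        print(f"Saved chart_{i}.png")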
Conclusion
By integrating E2B, LangChain, and function calling, you can automate complex workflows, generate and execute code dynamically, and simplify tasks like data analysis and visualization.
Resources
- E2B (Execution to Bot) Open Source GitHub Repo
- Build Fast With AI E2B (Execution to Bot) GitHub Repository
- OpenAI API Documentation
---------------------------------
Stay Updated: Follow Build Fast with AI pages for all the latest AI updates and resources.
Experts predict 2025 will be the defining year for Gen AI implementation. Want to be ahead of the curve?
Join Build Fast with AI’s Gen AI Launch Pad 2025 - your accelerated path to mastering AI tools and building revolutionary applications.