How to Use Gemini URL Context for Smarter, Real-Time AI Responses
Boost your LLM with Gemini URL Context. Ground responses in real-time web data using Google’s GenAI SDK for precise, fresh, and reliable outputs.

Large Language Models (LLMs) are incredibly powerful, but they often struggle with a key limitation: their knowledge is frozen at the time of their last training. This means they can't access live, up-to-the-minute information from the web. How can you build applications that reason about current events, compare real-time data, or synthesize information from specific online sources?
The answer lies in the Gemini URL Context tool. This powerful feature in the Google GenAI SDK allows your models to fetch and analyze live content directly from URLs you provide, grounding their responses in fresh, specific, and relevant data.
In this tutorial, we will walk you through everything you need to know to get started with the Gemini URL Context tool. You will learn how to:
Set up your environment and authenticate with the Gemini API.
Compare content from multiple web pages in a single request.
Inspect the metadata to see which URLs were fetched.
Synthesize information from various sources into a structured JSON output.
Combine URL Context with Google Search for powerful, agentic workflows.
Step-by-Step Tutorial: Using the Gemini URL Context Tool
Step 1: Install the Google GenAI SDK
First, you need to install the necessary Python package. This command will install the google-genai library, giving you access to the Gemini API.
%pip install -U -q google-genai
You should see an output indicating that the package has been successfully installed.
Step 2: Set Up Your API Key and Client
To use the Gemini API, you need to configure your API key. We will import the necessary libraries and then set up the client. Make sure you have your Gemini API key stored as a secret in your environment.
from google import genai
from google.genai import types
from google.colab import userdata

API_KEY = userdata.get('GEMINI_API_KEY')
if not API_KEY:
    raise RuntimeError("GEMINI_API_KEY not found")

client = genai.Client(api_key=API_KEY)
MODEL_ID = "gemini-2.5-flash"
Step 3: Compare Two Web Pages
Now, let's dive into a practical example. We will ask the model to compare two recipes for roast chicken from the Food Network website. To do this, we enable the url_context tool in our request.
# Enable the URL Context tool for this request
tools = [
    {"url_context": {}},
]

url1 = "https://www.foodnetwork.com/recipes/ina-garten/perfect-roast-chicken-recipe-1940592"
url2 = "https://www.foodnetwork.com/recipes/perfect-roast-chicken-3645195"

response = client.models.generate_content(
    model=MODEL_ID,
    contents=f"Compare the ingredients and cooking times from the recipes at {url1} and {url2}",
    config=types.GenerateContentConfig(
        tools=tools,
    ),
)

print(response.text)
The model will fetch the content from both URLs and provide a detailed comparison of the ingredients, cooking times, and other key differences.
Step 4: Inspect URL Retrieval Metadata
It's good practice to verify which URLs were fetched by the model. You can access the url_context_metadata on the response object to see the status of each URL retrieval.
metadata = getattr(response.candidates[0], "url_context_metadata", None)
if metadata and getattr(metadata, "url_metadata", None):
    for item in metadata.url_metadata:
        print(f"retrieved_url: {item.retrieved_url}")
        print(f"status: {item.url_retrieval_status}")
        print("-")
else:
    print("No url_context_metadata present or tool not used in this response.")
The output will show each URL that was successfully retrieved.
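If you want to act on that status programmatically, here is a minimal sketch that collects any URLs whose retrieval did not succeed. It assumes a successful fetch reports a status containing "SUCCESS" (as in the URL_RETRIEVAL_STATUS_SUCCESS value shown in the API metadata):

# Collect URLs whose retrieval status does not indicate success
# (assumption: a successful fetch reports URL_RETRIEVAL_STATUS_SUCCESS)
failed_urls = []
if metadata and getattr(metadata, "url_metadata", None):
    for item in metadata.url_metadata:
        if "SUCCESS" not in str(item.url_retrieval_status):
            failed_urls.append(item.retrieved_url)

if failed_urls:
    print("These URLs could not be fetched:", failed_urls)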
Advanced Usage
Synthesize Across Multiple URLs with Structured Output
The URL Context tool is even more powerful when you need to synthesize information from multiple sources and structure it in a specific format. In this example, we will ask the model to read two recipe URLs and return a JSON object containing a summary of each page and a list of cross-findings.
from pydantic import BaseModel
import json

class PageSummary(BaseModel):
    url: str
    title: str | None = None
    key_points: list[str]

class Synthesis(BaseModel):
    pages: list[PageSummary]
    cross_findings: list[str]

urls = [
    "https://www.foodnetwork.com/recipes/ina-garten/perfect-roast-chicken-recipe-1940592",
    "https://www.foodnetwork.com/recipes/perfect-roast-chicken-3645195",
]

response = client.models.generate_content(
    model=MODEL_ID,
    contents=(
        "Read the following URLs and return ONLY a concise JSON object with: "
        "for each page (url, title, 3-5 key_points) and 3-5 cross_findings that compare/contrast them.\n"
        + "\n".join(urls)
    ),
    config=types.GenerateContentConfig(
        tools=[{"url_context": {}}],
        response_schema=Synthesis,
    ),
)

print(response.text)
The model will return a clean JSON object that you can easily parse and use in your applications.
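Because the response is plain JSON text, you can load it straight into Python and, since we already defined Pydantic models, validate it against the schema. This is a minimal sketch that assumes the model returned valid JSON matching the Synthesis schema (it reuses the json import from the block above):

# Parse the JSON text returned by the model
# (assumption: the text is valid JSON matching the Synthesis schema)
data = json.loads(response.text)

# Optionally validate against the Pydantic schema defined above
synthesis = Synthesis.model_validate(data)

for page in synthesis.pages:
    print(page.url)
    for point in page.key_points:
        print(" -", point)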
Combine URL Context with Google Search
For the ultimate agentic workflow, you can combine the url_context tool with google_search. This allows the model to first search for relevant URLs and then fetch their content for deeper analysis.
# Enable both tools so the model can search for URLs and then read them
tools = [
    {"url_context": {}},
    {"google_search": {}},
]

prompt = (
    "Find 2-3 recent articles about urban micromobility trends (e-scooters, e-bikes), "
    "then provide a 5-bullet synthesis with a short source attribution after each bullet."
)

response = client.models.generate_content(
    model=MODEL_ID,
    contents=prompt,
    config=types.GenerateContentConfig(tools=tools),
)
print(response.text)
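To see which sources the model actually used, you can inspect the same url_context_metadata as before and, when google_search is involved, the candidate's grounding metadata. The sketch below is a rough illustration that assumes the response exposes a grounding_metadata attribute with grounding_chunks entries carrying web.uri and web.title, as described in the Gemini grounding documentation; these fields may be absent if a tool wasn't triggered:

candidate = response.candidates[0]

# URLs fetched via the URL Context tool (may be absent if the tool wasn't used)
url_meta = getattr(candidate, "url_context_metadata", None)
if url_meta and getattr(url_meta, "url_metadata", None):
    for item in url_meta.url_metadata:
        print("fetched:", item.retrieved_url, item.url_retrieval_status)

# Web sources surfaced via Google Search grounding (attribute names assumed)
grounding = getattr(candidate, "grounding_metadata", None)
if grounding and getattr(grounding, "grounding_chunks", None):
    for chunk in grounding.grounding_chunks:
        if getattr(chunk, "web", None):
            print("search source:", chunk.web.title, chunk.web.uri)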
Conclusion
The Gemini URL Context tool is a game-changer for building LLM applications that require real-time, web-grounded knowledge. By following this tutorial, you have learned how to use this tool to compare, synthesize, and analyze information from across the web.
Key Takeaways
Live Data: The URL Context tool connects your LLM to live web data.
Multi-Source Analysis: You can easily compare and synthesize information from up to 20 URLs in a single request.
Structured Output: Combine URL Context with response schemas to get structured JSON output for easy integration into your workflows.
Agentic Workflows: Combine URL Context with Google Search to create powerful, autonomous agents that can find and analyze information.
References
Colab Notebook: Explore the code in this tutorial on Google Colab.
Official Documentation: Google AI for Developers
GitHub Repository: Gen AI Experiments
Community & Call to Action
We encourage you to try out the code, experiment with different URLs, and see what you can build!
If you found this tutorial helpful, please share it with your network and ⭐ star our GitHub repository.
Website: buildfastwithai.com
LinkedIn: Build Fast with AI
X (formerly Twitter): @buildfastwithai
Telegram: Build Fast with AI Community