
The Interactions API is an alternative to Gemini’s generateContent endpoint. Instead of sending the full conversation history on every turn, it stores prior turns server-side and references them via previous_interaction_id. This reduces token costs and latency through implicit caching. See the Interactions API documentation for more details.
The Interactions API is experimental and may change in future versions. You will see the following warning when using it:

```
UserWarning: Interactions usage is experimental and may change in future versions.
```

Requires google-genai>=2.0.

Installation

```shell
uv pip install "google-genai>=2.0" agno
```

Authentication

Set the GOOGLE_API_KEY environment variable. You can get one from Google AI Studio.
```shell
export GOOGLE_API_KEY=***
```

Example

```python
from agno.agent import Agent
from agno.models.google import GeminiInteractions

agent = Agent(
    model=GeminiInteractions(id="gemini-3-flash-preview"),
    markdown=True,
)

agent.print_response("Share a 2 sentence horror story.")
```
View more examples here.

How It Works

  1. On the first turn, the agent sends the user message and receives a response along with an interaction_id.
  2. On subsequent turns, only the new message is sent with previous_interaction_id referencing the prior turn.
  3. The server reconstructs the full context from stored history, applying implicit caching to reduce cost.
This is transparent to the user. The Agent class handles interaction_id tracking automatically.

Capabilities

- Multi-turn: Server-side history management
- Thinking: Reasoning with thinking levels
- Google Search: Built-in web search
- Tool Use: Function calling
- Structured Output: Pydantic schema enforcement
- Background Execution: Long-running tasks

Multi-turn Conversations

The key advantage of the Interactions API. Prior turns are stored server-side and referenced by ID, so only the new message is sent each turn.
```python
from agno.agent import Agent
from agno.models.google import GeminiInteractions

agent = Agent(
    model=GeminiInteractions(id="gemini-3-flash-preview"),
    add_history_to_context=True,
    markdown=True,
)

agent.print_response("My name is Alice and I love hiking in the mountains.")
agent.print_response("What did I just tell you about myself?")
agent.print_response("Suggest a hiking destination based on what you know about me.")
```
Read more about multi-turn conversations here.

Thinking

Enable extended reasoning with the thinking_level parameter. Accepts "low" or "high".
```python
from agno.agent import Agent
from agno.models.google import GeminiInteractions

agent = Agent(
    model=GeminiInteractions(
        id="gemini-3-flash-preview",
        thinking_level="high",
    ),
    markdown=True,
)

agent.print_response("Explain why the sum of angles in a triangle is always 180 degrees.")
```
Read more about thinking here.

Google Search

Enable built-in Google Search by setting search=True. No external tool is needed.
```python
from agno.agent import Agent
from agno.models.google import GeminiInteractions

agent = Agent(
    model=GeminiInteractions(
        id="gemini-3-flash-preview",
        search=True,
    ),
    markdown=True,
)

agent.print_response("What are the latest developments in quantum computing?")
```
Read more about Google Search here.

Tool Use

Function calling works the same as with the Gemini class.
```python
from agno.agent import Agent
from agno.models.google import GeminiInteractions
from agno.tools.websearch import WebSearchTools

agent = Agent(
    model=GeminiInteractions(id="gemini-3-flash-preview"),
    tools=[WebSearchTools()],
    markdown=True,
)

agent.print_response("What's happening in France?")
```
Read more about tool use here.

Structured Output

Use Pydantic models to enforce a JSON schema on the response.
```python
from agno.agent import Agent
from agno.models.google import GeminiInteractions
from pydantic import BaseModel, Field

class MovieReview(BaseModel):
    title: str = Field(description="The movie title")
    year: int = Field(description="Release year")
    genre: str = Field(description="Primary genre")
    rating: float = Field(description="Rating out of 10")
    summary: str = Field(description="Brief review summary")

agent = Agent(
    model=GeminiInteractions(id="gemini-3-flash-preview"),
    output_schema=MovieReview,
)

response = agent.run("Write a review of The Matrix (1999)")
print(response.content)  # a MovieReview instance
```
Read more about structured output here.
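Under the hood, the Pydantic model supplies a JSON schema that constrains the response, and the raw reply is validated back into a typed object. A minimal sketch of that mechanism, using only Pydantic (no API call; the `raw` JSON stands in for a model reply):

```python
from pydantic import BaseModel, Field

class MovieReview(BaseModel):
    title: str = Field(description="The movie title")
    year: int = Field(description="Release year")

# The JSON schema the API receives: field names, types, and descriptions
schema = MovieReview.model_json_schema()
print(sorted(schema["properties"]))  # ['title', 'year']

# Validating a (simulated) model reply back into a typed object
raw = '{"title": "The Matrix", "year": 1999}'
review = MovieReview.model_validate_json(raw)
print(review.year)  # 1999
```

If the reply violates the schema (a missing field, a non-integer `year`), validation raises a `ValidationError` instead of silently returning malformed data.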

Background Execution

For long-running tasks like Deep Research, enable background execution. The API offloads the task and returns results when complete.
```python
from agno.agent import Agent
from agno.models.google import GeminiInteractions

agent = Agent(
    model=GeminiInteractions(
        id="gemini-3-flash-preview",
        background=True,
    ),
    markdown=True,
)

agent.print_response("Research the history of quantum computing.")
```

Interactions API vs generateContent

| Feature | GeminiInteractions | Gemini |
| --- | --- | --- |
| Conversation history | Server-side, referenced by ID | Client-side, resent each turn |
| Caching | Implicit on prior turns | Manual via context caching API |
| Token cost on multi-turn | Lower (only new message sent) | Higher (full history resent) |
| Background execution | Supported | Not supported |
| Response format | Typed execution steps | Generic content parts |
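The token-cost row can be made concrete with rough arithmetic. The message size below is an assumed average for illustration, not a measured figure:

```python
# Rough input-token accounting for an n-turn chat.
# generateContent resends the full history every turn, so input tokens grow
# roughly quadratically with turn count; the Interactions API sends only
# the new message each turn, so they grow linearly.
turns = 10
tokens_per_message = 100  # assumed average message size

# generateContent: on turn k the client sends 2k-1 messages
# ((k-1) user + (k-1) model from history, plus the new user message)
resend = sum((2 * k - 1) * tokens_per_message for k in range(1, turns + 1))

# Interactions API: each turn sends one new message
reference = turns * tokens_per_message

print(resend, reference)  # 10000 1000
```

At ten turns the resend strategy has already consumed ten times the input tokens, and the gap widens with every additional turn; implicit caching on the server side narrows the cost difference further.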

Params

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `id` | `str` | `"gemini-3-flash-preview"` | The model identifier |
| `name` | `str` | `"GeminiInteractions"` | The name of the model |
| `provider` | `str` | `"Google"` | The provider of the model |
| `api_key` | `Optional[str]` | `None` | Google API key (defaults to GOOGLE_API_KEY env var) |
| `temperature` | `Optional[float]` | `None` | Controls randomness (0.0-2.0) |
| `top_p` | `Optional[float]` | `None` | Nucleus sampling threshold |
| `max_output_tokens` | `Optional[int]` | `None` | Maximum tokens in response |
| `stop_sequences` | `Optional[list[str]]` | `None` | Sequences that stop generation |
| `seed` | `Optional[int]` | `None` | Random seed for reproducibility |
| `response_modalities` | `Optional[list[str]]` | `None` | Output types (e.g., `["text", "image"]`) |
| `store` | `Optional[bool]` | `None` | Persist interactions server-side (default: True) |
| `background` | `Optional[bool]` | `None` | Offload to background execution |
| `thinking_level` | `Optional[str]` | `None` | Reasoning intensity: `"low"` or `"high"` |
| `search` | `bool` | `False` | Enable built-in Google Search |
| `url_context` | `bool` | `False` | Enable URL context extraction |
| `code_execution` | `bool` | `False` | Enable code execution |
| `service_tier` | `Optional[str]` | `None` | Inference tier: `"flex"`, `"standard"`, or `"priority"` |
| `timeout` | `Optional[float]` | `None` | Request timeout in seconds |
| `client_params` | `Optional[Dict[str, Any]]` | `None` | Additional client parameters |
GeminiInteractions is a subclass of the Model class and has access to the same params.