Gemini
Basic Agent
Code
cookbook/models/google/gemini/basic.py
from agno.agent import Agent, RunResponse  # noqa
from agno.models.google import Gemini

agent = Agent(model=Gemini(id="gemini-2.0-flash-exp"), markdown=True)

# Get the response in a variable
# run: RunResponse = agent.run("Share a 2 sentence horror story")
# print(run.content)

# Print the response in the terminal
agent.print_response("Share a 2 sentence horror story")
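
If you want the response back as an object rather than printed, or streamed to the terminal as it is generated, a minimal variation of the script above looks like this. It is a sketch using the same agno calls; the stream flag is an assumption and is not shown in this example.

from agno.agent import Agent, RunResponse
from agno.models.google import Gemini

agent = Agent(model=Gemini(id="gemini-2.0-flash-exp"), markdown=True)

# Capture the response as a RunResponse object instead of printing it
run: RunResponse = agent.run("Share a 2 sentence horror story")
print(run.content)

# Stream the response to the terminal as it is generated
# (stream=True is assumed here, not part of the original example)
agent.print_response("Share a 2 sentence horror story", stream=True)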
Usage
1. Create a virtual environment
Open the Terminal and create a Python virtual environment.
python3 -m venv .venv
source .venv/bin/activate
2. Set your API key
export GOOGLE_API_KEY=xxx
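
If you prefer not to rely on the environment variable alone, the key can also be passed to the model directly. This is a sketch, assuming the Gemini model class accepts an api_key argument (not shown in this example).

import os

from agno.agent import Agent
from agno.models.google import Gemini

# Read the key from the environment and pass it explicitly.
# Assumes Gemini accepts an api_key argument; otherwise exporting
# GOOGLE_API_KEY as above is sufficient.
agent = Agent(
    model=Gemini(id="gemini-2.0-flash-exp", api_key=os.getenv("GOOGLE_API_KEY")),
    markdown=True,
)
agent.print_response("Share a 2 sentence horror story")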
3. Install libraries
pip install -U google-genai agno
4. Run the Agent
python cookbook/models/google/gemini/basic.py