Multimodal
Audio to text Agent
Code
import requests

from agno.agent import Agent
from agno.media import Audio
from agno.models.google import Gemini

# Create an agent backed by a Gemini model with audio understanding
agent = Agent(
    model=Gemini(id="gemini-2.0-flash-exp"),
    markdown=True,
)

# Download the sample audio clip into memory
url = "https://agno-public.s3.us-east-1.amazonaws.com/demo_data/QA-01.mp3"
response = requests.get(url)
audio_content = response.content

# Ask the agent to transcribe the conversation, streaming the response
agent.print_response(
    "Give a transcript of this audio conversation. Use speaker A, speaker B to identify speakers.",
    audio=[Audio(content=audio_content)],
    stream=True,
)
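The example downloads the clip into memory before handing it to the agent. If the audio file already lives on disk, its raw bytes can be read and passed the same way. A minimal sketch — the helper name and file path are hypothetical, not part of the Agno API:

```python
from pathlib import Path

def load_audio_bytes(path: str) -> bytes:
    # Return the raw bytes of a local audio file
    return Path(path).read_bytes()
```

The returned bytes can then be passed as `Audio(content=load_audio_bytes("sample.mp3"))`, exactly as with the downloaded content above.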
Usage

1. Create a virtual environment

Open the terminal and create a Python virtual environment:

python3 -m venv .venv
source .venv/bin/activate
2. Set your API key

export GOOGLE_API_KEY=xxx
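Before running the script, it can help to confirm the key is actually visible to the Python process. A small sketch checking the same `GOOGLE_API_KEY` variable (the helper function is illustrative, not part of Agno):

```python
import os

def gemini_key_present() -> bool:
    # True if GOOGLE_API_KEY is set to a non-empty value
    return bool(os.environ.get("GOOGLE_API_KEY"))
```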
3. Install libraries

pip install -U agno google-genai requests
4. Run the Agent

python cookbook/agent_concepts/multimodal/audio_to_text.py