Multimodal
Image to Audio Agent
Code
from pathlib import Path

from agno.agent import Agent, RunResponse
from agno.media import Image
from agno.models.openai import OpenAIChat
from agno.utils.audio import write_audio_to_file
from rich import print
from rich.text import Text

# Agent that writes a short story about an image
image_agent = Agent(model=OpenAIChat(id="gpt-4o"))

image_path = Path(__file__).parent.joinpath("sample.jpg")

image_story: RunResponse = image_agent.run(
    "Write a 3 sentence fiction story about the image",
    images=[Image(filepath=image_path)],
)

formatted_text = Text.from_markup(
    f":sparkles: [bold magenta]Story:[/bold magenta] {image_story.content} :sparkles:"
)
print(formatted_text)

# Agent that narrates the story using OpenAI's audio-capable model
audio_agent = Agent(
    model=OpenAIChat(
        id="gpt-4o-audio-preview",
        modalities=["text", "audio"],
        audio={"voice": "alloy", "format": "wav"},
    ),
)

audio_story: RunResponse = audio_agent.run(
    f"Narrate the story with flair: {image_story.content}"
)

# Save the generated narration to a WAV file
if audio_story.response_audio is not None:
    write_audio_to_file(
        audio=audio_story.response_audio.content, filename="tmp/sample_story.wav"
    )
Usage
1. Create a virtual environment
Open the Terminal and create a Python virtual environment (a minimal set of commands is sketched after these steps).
2. Set your API key
export OPENAI_API_KEY=xxx
3. Install libraries
pip install -U openai rich agno
4. Run Agent
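The steps above leave out the exact shell commands for creating the environment and running the script. Here is a minimal sketch for steps 1 and 4, assuming a Unix-like shell and that the code above is saved as image_to_audio_agent.py (the filename is an assumption; use whatever name you saved the script under).

# 1. Create and activate a virtual environment
python3 -m venv .venv
source .venv/bin/activate

# 4. Run the Agent (after setting the API key and installing the libraries in steps 2 and 3).
# The script writes the narration to tmp/sample_story.wav, so create the directory first
# in case the helper does not create it for you.
mkdir -p tmp
python image_to_audio_agent.py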