The YouTube Reader allows you to extract transcripts from YouTube videos synchronously and convert them into vector embeddings for your knowledge base.

Code

examples/concepts/knowledge/readers/youtube_reader_sync.py
from agno.agent import Agent
from agno.knowledge.knowledge import Knowledge
from agno.knowledge.reader.youtube_reader import YouTubeReader
from agno.vectordb.pgvector import PgVector

db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"

# Create Knowledge Instance
knowledge = Knowledge(
    name="YouTube Knowledge Base",
    description="Knowledge base from YouTube video transcripts",
    vector_db=PgVector(
        table_name="youtube_vectors", 
        db_url=db_url
    ),
)

# Add YouTube video content synchronously
knowledge.add_content(
    metadata={"source": "youtube", "type": "educational"},
    urls=[
        "https://www.youtube.com/watch?v=dQw4w9WgXcQ",  # Replace with actual educational video
        "https://www.youtube.com/watch?v=example123"   # Replace with actual video URL
    ],
    reader=YouTubeReader(),
)

# Create an agent with the knowledge
agent = Agent(
    knowledge=knowledge,
    search_knowledge=True,
)

# Query the knowledge base
agent.print_response(
    "What are the main topics discussed in the videos?",
    markdown=True
)
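The `db_url` above follows SQLAlchemy's `dialect+driver://user:password@host:port/database` URL format. As a quick illustration (standard library only, not part of the agno API), the components can be pulled apart with `urllib.parse`:

```python
from urllib.parse import urlsplit

# The connection string used in the example above (SQLAlchemy URL format)
db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"

parts = urlsplit(db_url)
print(parts.scheme)                 # dialect+driver: postgresql+psycopg
print(parts.username, parts.password)  # ai ai
print(parts.hostname, parts.port)      # localhost 5532
print(parts.path.lstrip("/"))          # database name: ai
```

Note that the port is 5532 (not Postgres's default 5432) to match the port mapping in the Docker command below.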

Usage

1. Create a virtual environment

Open the Terminal and create a Python virtual environment:

python3 -m venv .venv
source .venv/bin/activate
2. Install libraries

pip install -U youtube-transcript-api pytube sqlalchemy psycopg pgvector agno openai
3. Set environment variables

export OPENAI_API_KEY=xxx
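Since the example fails at runtime without this key, a small pre-flight check (a sketch using only the standard library, not part of agno) can surface a missing variable before anything else runs:

```python
import os

# Collect any required environment variables that are not set
missing = [name for name in ("OPENAI_API_KEY",) if not os.environ.get(name)]
if missing:
    print("Missing environment variables:", ", ".join(missing))
else:
    print("All required environment variables are set.")
```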
4. Run PgVector

docker run -d \
  -e POSTGRES_DB=ai \
  -e POSTGRES_USER=ai \
  -e POSTGRES_PASSWORD=ai \
  -e PGDATA=/var/lib/postgresql/data/pgdata \
  -v pgvolume:/var/lib/postgresql/data \
  -p 5532:5432 \
  --name pgvector \
  agno/pgvector:16
5. Run Agent

python examples/concepts/knowledge/readers/youtube_reader_sync.py

Params

Parameter   Type   Default    Description
video_url   str    Required   URL of the YouTube video to extract transcript from
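The `video_url` parameter accepts standard YouTube watch URLs like those in the example above. As an illustrative sketch (a hypothetical helper, not part of the agno API), the 11-character video ID can be recovered from the common URL shapes with the standard library:

```python
from typing import Optional
from urllib.parse import urlsplit, parse_qs

def extract_video_id(url: str) -> Optional[str]:
    """Pull the video ID out of common YouTube URL shapes (illustrative only)."""
    parts = urlsplit(url)
    if parts.hostname in ("www.youtube.com", "youtube.com", "m.youtube.com"):
        # Standard watch URL: the ID lives in the ?v= query parameter
        return parse_qs(parts.query).get("v", [None])[0]
    if parts.hostname == "youtu.be":
        # Short link: the ID is the path itself
        return parts.path.lstrip("/") or None
    return None

print(extract_video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))  # dQw4w9WgXcQ
print(extract_video_id("https://youtu.be/dQw4w9WgXcQ"))                 # dQw4w9WgXcQ
```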