Build an AI agent that transforms unstructured web content into organized, structured data by combining Firecrawl’s web scraping with Pydantic’s structured output validation.

What You’ll Learn

By building this agent, you’ll understand:
  • How to integrate Firecrawl for reliable web scraping and content extraction
  • How to define structured output schemas using Pydantic models
  • How to create nested data structures for complex web content
  • How to handle optional fields and varied page structures

Use Cases

Build competitive intelligence tools, content aggregation systems, knowledge base constructors, or automated documentation generators.

How It Works

The agent extracts structured data from web pages in a systematic process:
  1. Fetch: Uses Firecrawl to retrieve and parse the target webpage
  2. Analyze: Identifies key sections, elements, and hierarchical structure
  3. Extract: Pulls information according to the Pydantic output schema
  4. Structure: Organizes content into nested models (sections, metadata, links, contact info)
The Pydantic schema ensures consistent output format regardless of the source website’s structure, with optional fields handling varied page layouts gracefully.
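To make that last point concrete, here is a minimal, standalone sketch of how an Optional field absorbs layout differences. The Section model is illustrative only; the agent below uses the richer ContentSection and PageInformation models.

from typing import Optional

from pydantic import BaseModel, Field


class Section(BaseModel):
    heading: Optional[str] = Field(None, description="Section heading")
    content: str = Field(..., description="Section body text")


# A sparse page: the optional heading is absent and defaults to None.
sparse = Section.model_validate({"content": "A paragraph with no heading."})
print(sparse.heading)  # None

# A richer page validates against the same schema, unchanged.
rich = Section.model_validate({"heading": "Pricing", "content": "Plans start at $0."})
print(rich.heading)  # Pricing

Both payloads pass validation, so the agent can return the same schema for very different source pages.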

Code

web_extraction_agent.py
from textwrap import dedent
from typing import Dict, List, Optional

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.tools.firecrawl import FirecrawlTools
from pydantic import BaseModel, Field
from rich.pretty import pprint


class ContentSection(BaseModel):
    """Represents a section of content from the webpage."""

    heading: Optional[str] = Field(None, description="Section heading")
    content: str = Field(..., description="Section content text")


class PageInformation(BaseModel):
    """Structured representation of a webpage."""

    url: str = Field(..., description="URL of the page")
    title: str = Field(..., description="Title of the page")
    description: Optional[str] = Field(
        None, description="Meta description or summary of the page"
    )
    features: Optional[List[str]] = Field(None, description="Key feature list")
    content_sections: Optional[List[ContentSection]] = Field(
        None, description="Main content sections of the page"
    )
    links: Optional[Dict[str, str]] = Field(
        None, description="Important links found on the page with description"
    )
    contact_info: Optional[Dict[str, str]] = Field(
        None, description="Contact information if available"
    )
    metadata: Optional[Dict[str, str]] = Field(
        None, description="Important metadata from the page"
    )


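# The agent pairs an OpenAI model with Firecrawl scraping tools; output_schema
# constrains the response to the PageInformation model defined above.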
agent = Agent(
    model=OpenAIChat(id="gpt-4.1"),
    tools=[FirecrawlTools(enable_scrape=True, enable_crawl=True)],
    instructions=dedent("""
        You are an expert web researcher and content extractor. Extract comprehensive, structured information
        from the provided webpage. Focus on:

        1. Accurately capturing the page title, description, and key features
        2. Identifying and extracting main content sections with their headings
        3. Finding important links to related pages or resources
        4. Locating contact information if available
        5. Extracting relevant metadata that provides context about the site

        Be thorough but concise. If the page has extensive content, prioritize the most important information.
    """).strip(),
    output_schema=PageInformation,
)

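# Run a one-shot extraction; result.content is a PageInformation instance.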
result = agent.run("Extract all information from https://www.agno.com")
pprint(result.content)

What to Expect

The agent will scrape the target URL using Firecrawl and extract all information into a structured PageInformation object. The output includes the page title, description, features, organized content sections with headings, important links, contact information, and additional metadata. The structured output ensures consistency and makes the extracted data easy to process, store, or display programmatically. Optional fields handle pages with varying structures gracefully.
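As a sketch of that post-processing, assuming result.content is the validated PageInformation instance produced above, you might persist and walk the output like this:

page = result.content  # a PageInformation instance when output_schema is set

# Serialize for storage; exclude_none drops optional fields that stayed empty.
with open("page.json", "w") as f:
    f.write(page.model_dump_json(indent=2, exclude_none=True))

# Walk the nested sections programmatically.
for section in page.content_sections or []:
    print(section.heading or "(no heading)", "-", len(section.content), "chars")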

Usage

1. Create a virtual environment

Open the Terminal and create a Python virtual environment.

python3 -m venv .venv
source .venv/bin/activate

2. Set your API keys

export OPENAI_API_KEY=xxx
export FIRECRAWL_API_KEY=xxx

3. Install libraries

pip install -U agno openai firecrawl-py

4. Run the agent

python web_extraction_agent.py

Next Steps

  • Change the target URL to extract data from different websites
  • Modify the PageInformation Pydantic model to capture additional fields (see the sketch after this list)
  • Adjust the agent’s instructions to focus on specific content types
  • Explore Firecrawl Tools for advanced scraping options
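For the second item, one possible extension, hedged: pricing_tiers and social_links are hypothetical fields chosen for illustration, not part of the original model.

# In web_extraction_agent.py, after the PageInformation definition:
from typing import Dict, List, Optional

from pydantic import Field


class ExtendedPageInformation(PageInformation):
    """PageInformation plus hypothetical extra fields."""

    pricing_tiers: Optional[List[str]] = Field(
        None, description="Names of pricing plans, if the page lists any"
    )
    social_links: Optional[Dict[str, str]] = Field(
        None, description="Platform name mapped to profile URL"
    )


# Then point the agent at the extended schema:
# agent = Agent(..., output_schema=ExtendedPageInformation)

Because the new fields are optional, pages that lack pricing or social links still validate cleanly.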