A useful agent gets sharper the more you use it: generic patterns at first, your conventions after a few weeks of use, eventually anticipating what you need. That compounding behavior comes from LearningMachine. Four agents in the demo use it.
| Agent | What it learns |
| --- | --- |
| Pal | What you've ingested, how you write, who's in your network, recurring meetings |
| Dash | Schema gotchas, validated query patterns, error fixes that worked |
| Contacts | People, relationships, events, your communication preferences |
| Investment | Past investment decisions, framework refinements, pattern recognition |

## The shape

```python
from agno.agent import Agent
from agno.learn import (
    LearningMachine, LearningMode,
    UserProfileConfig, EntityMemoryConfig, SessionContextConfig,
    LearnedKnowledgeConfig,
)

contacts = Agent(
    learning=LearningMachine(
        user_profile=UserProfileConfig(mode=LearningMode.ALWAYS),
        entity_memory=EntityMemoryConfig(
            mode=LearningMode.AGENTIC,
            enable_create_entity=True,
            enable_add_fact=True,
            enable_add_relationship=True,
            enable_add_event=True,
        ),
        session_context=SessionContextConfig(
            mode=LearningMode.ALWAYS,
            enable_planning=True,
        ),
        learned_knowledge=LearnedKnowledgeConfig(mode=LearningMode.AGENTIC),
    ),
    ...
)
```
Four learning stores, each opt-in:
| Store | Holds | Mode |
| --- | --- | --- |
| User profile | Preferences, role, working style | `ALWAYS` (auto-extracted every run) |
| Entity memory | People, projects, relationships, events | `AGENTIC` (agent decides when to write) |
| Session context | Plans and structure for the current session | `ALWAYS` |
| Learned knowledge | Discovered patterns the agent wants to remember | `AGENTIC` |
`LearningMode.ALWAYS` runs the extractor on every turn. `LearningMode.AGENTIC` gives the agent tools to write learnings when it judges them worth keeping. See Learning Modes.
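The distinction is mechanical: `ALWAYS` means the framework invokes an extractor after every turn, while `AGENTIC` means the store is surfaced as tools the model can choose to call. A toy sketch of that dispatch (plain Python, not agno internals; all names here are illustrative):

```python
from enum import Enum, auto

class LearningMode(Enum):
    ALWAYS = auto()   # framework runs the extractor on every turn
    AGENTIC = auto()  # store is exposed as tools; the model decides

def tools_for(stores: dict[str, LearningMode]) -> list[str]:
    """AGENTIC stores become write-tools offered to the agent."""
    return [f"save_to_{name}" for name, mode in stores.items()
            if mode is LearningMode.AGENTIC]

def after_turn(stores: dict[str, LearningMode], turn: str, extract) -> None:
    """ALWAYS stores run their extractor unconditionally after each turn."""
    for name, mode in stores.items():
        if mode is LearningMode.ALWAYS:
            extract(name, turn)
```

With `user_profile=ALWAYS` and `entity_memory=AGENTIC`, only the entity store shows up in the tool list, and only the profile extractor fires automatically after a turn.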

## Why this beats vanilla memory

Vanilla `enable_agentic_memory=True` gives you one bucket. The agent dumps facts into it. Useful, but flat. `LearningMachine` separates concerns:

- **User profile** is "this is who I'm talking to" (one record, updated in place).
- **Entity memory** is "this is who/what they're talking about" (graph of nodes and edges).
- **Session context** is "this is the plan for this conversation" (scoped to one session).
- **Learned knowledge** is "this is something I want to remember for next time" (general patterns).
Each store has its own retrieval path. The agent gets the right slice of memory at the right point in the run.
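As a mental model, the four stores with independent retrieval paths can be pictured as a struct of separately queried slices. This is an illustrative sketch only, not agno's data model: the real entity store is a graph of nodes and edges, not a flat dict, and the class and method names below are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class MemorySlices:
    """Each store keeps its own shape and its own retrieval path."""
    user_profile: dict = field(default_factory=dict)       # one record, updated in place
    entity_memory: dict = field(default_factory=dict)      # entity -> facts (flat stand-in for the graph)
    session_context: dict = field(default_factory=dict)    # keyed by session id
    learned_knowledge: list = field(default_factory=list)  # general patterns

    def context_for(self, session_id: str, entities: list[str]) -> dict:
        """Assemble only the relevant slice of each store for this run."""
        return {
            "profile": self.user_profile,
            "entities": {e: self.entity_memory.get(e, []) for e in entities},
            "plan": self.session_context.get(session_id, {}),
            "learnings": self.learned_knowledge,
        }
```

The point of the separation: a question about the Acme team pulls entity facts without dragging in session plans, and a new session starts with an empty plan while the profile and entities persist.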

## Dash's learning loop

Dash uses learning differently. The Engineer agent runs SQL, hits errors, diagnoses them, and saves the fix as learned knowledge:
```python
from agno.agent import Agent
from agno.knowledge import Knowledge  # import path may vary by agno version
from agno.learn import LearnedKnowledgeConfig, LearningMachine, LearningMode

dash_learnings = Knowledge(...)

dash_engineer = Agent(
    learning=LearningMachine(
        knowledge=dash_learnings,
        learned_knowledge=LearnedKnowledgeConfig(mode=LearningMode.AGENTIC),
    ),
)
```
The agent gets `save_learning(title, learning)` and `search_learnings(query)` tools. It diagnoses an error once; the next time the same pattern shows up, it pulls the fix from learnings instead of rediscovering it. This is what makes the Dash team self-improving: after a few weeks of use, the same questions take fewer iterations because the right-shaped learnings are already in the store.
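The loop itself is simple enough to sketch in a few lines. Everything below is an illustrative stand-in: `LearningStore`, `answer`, and the title-keyword search are invented for the example, and the hard-coded "diagnosis" stands in for the agent's actual SQL debugging:

```python
class LearningStore:
    """Toy stand-in for the save_learning / search_learnings tool pair
    (agno's store is backed by Knowledge with real search, not a list)."""

    def __init__(self):
        self._entries: list[tuple[str, str]] = []

    def save_learning(self, title: str, learning: str) -> None:
        self._entries.append((title, learning))

    def search_learnings(self, query: str) -> list[str]:
        # Naive title-in-query match; a real store does semantic search.
        return [body for title, body in self._entries
                if title.lower() in query.lower()]

def answer(store: LearningStore, question: str) -> str:
    hits = store.search_learnings(question)
    if hits:
        return f"from learnings: {hits[0]}"  # fast path on repeat questions
    # Pretend diagnosis; the real agent runs SQL and debugs the error.
    fix = "churn query must exclude trial accounts"
    store.save_learning("churn", fix)        # slow path, first encounter
    return f"diagnosed: {fix}"
```

Asking the same question twice shows the shape of the loop: the first call diagnoses and saves, the second retrieves the saved fix instead of rediscovering it.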

## See it in action

| Agent | Try in chat |
| --- | --- |
| Contacts | "Sarah Chen joined Acme as VP Engineering last month." → entity gets created, relationship added |
| Contacts | "What do I know about the Acme team?" → entity memory query, returns people + facts + events |
| Dash | First time: "why is churn high?" → diagnoses, saves learning. Second time: same question, faster, references the saved pattern |
| Pal | "I prefer concise responses." → user profile updated; future responses get terser |
Source: `agents/contacts/`, `agents/dash/`, Learning docs
