Overview
| Mode | Behavior | LLM Calls | Best For |
|---|---|---|---|
| ALWAYS | Automatic extraction after each response | Extra call per interaction | Consistent learning |
| AGENTIC | Agent uses tools to decide what to save | No extra calls | Selective learning |
| PROPOSE | Agent proposes, user confirms | No extra calls | High-stakes knowledge |
ALWAYS Mode
In ALWAYS mode, the Learning Machine automatically extracts relevant information after each conversation turn. No agent tools are involved; extraction happens in the background.

How it works:
- The agent responds to the user
- In parallel, an extraction LLM call analyzes the conversation
- Relevant information is saved to the store
- No agent awareness or tool calls are needed

Best suited for:
- User Profile (capturing names, preferences)
- User Memory (capturing observations)
- Session Context (maintaining summaries)

Default stores: `user_profile`, `user_memory`, `session_context`
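The background flow above can be sketched in plain Python. This is a minimal illustration, not the library's API: `extract_facts` stands in for the extra extraction LLM call, and the `store` dict stands in for a memory store; all names here are hypothetical.

```python
# Sketch of the ALWAYS-mode flow: after each agent response, a separate
# extraction step analyzes the turn and writes to the store. The agent
# itself never sees this step. extract_facts() is a stand-in for the
# background LLM call; in reality it would be a model invocation.

def extract_facts(user_msg: str, agent_msg: str) -> dict:
    """Stand-in for the background extraction LLM call."""
    facts = {}
    if "my name is" in user_msg.lower():
        facts["name"] = user_msg.split("my name is", 1)[1].strip(" .")
    return facts

def handle_turn(user_msg: str, agent_msg: str, store: dict) -> None:
    # Runs after the response has already been returned to the user,
    # which is why ALWAYS mode adds one extra LLM call per turn.
    store.update(extract_facts(user_msg, agent_msg))

store: dict = {}
handle_turn("Hi, my name is Ada.", "Nice to meet you, Ada!", store)
print(store)  # {'name': 'Ada'}
```

The key property is that the extraction step is decoupled from response generation, so learning is consistent but costs one extra call per interaction.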
AGENTIC Mode
In AGENTIC mode, the agent is given tools to explicitly save information. The agent decides when and what to save based on the conversation context.

How it works:
- The agent receives tools such as `update_user_profile` and `save_learning`
- During response generation, the agent decides if and when to call them
- Tool calls save information to the store
- The agent has full control and visibility
| Store | Tools |
|---|---|
| User Profile | update_user_profile |
| User Memory | save_user_memory, delete_user_memory |
| Entity Memory | search_entities, create_entity, update_entity, add_fact, add_event |
| Learned Knowledge | search_learnings, save_learning |
Best suited for:
- Learned Knowledge (agent captures insights when relevant)
- Entity Memory (agent builds knowledge graph as needed)

Default stores: `learned_knowledge`, `entity_memory`
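The tool-driven flow can be sketched as a small dispatch loop. This is an illustrative mock, assuming tool names from the table above; the dispatch mechanics and store shape are hypothetical, not the library's implementation.

```python
# Sketch of AGENTIC mode: the agent is handed save tools and chooses
# when to call them. Tool functions write directly to the store, so
# there is no extra LLM call unless the agent actually fires a tool.
from typing import Callable

store = {"user_profile": {}, "learned_knowledge": []}

def update_user_profile(field: str, value: str) -> None:
    store["user_profile"][field] = value

def save_learning(text: str) -> None:
    store["learned_knowledge"].append(text)

TOOLS: dict[str, Callable] = {
    "update_user_profile": update_user_profile,
    "save_learning": save_learning,
}

# Tool calls as the model might emit them mid-response (mocked here).
agent_tool_calls = [
    ("update_user_profile", {"field": "role", "value": "data engineer"}),
    ("save_learning", {"text": "Customer exports run nightly at 02:00 UTC."}),
]
for name, args in agent_tool_calls:
    TOOLS[name](**args)  # cost is incurred only when the agent decides to save
```

Because saving happens inside normal tool use, AGENTIC mode adds no fixed per-turn cost, at the price of relying on the agent's judgment about what is worth keeping.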
PROPOSE Mode
In PROPOSE mode, the agent proposes learnings but requires user confirmation before saving. This provides human-in-the-loop quality control.

How it works:
- The agent calls the `propose_learning` tool
- The learning is stored with `status="proposed"`
- The user reviews and approves or rejects it
- Only approved learnings are used in future context

Best suited for:
- High-stakes knowledge that needs human review
- Quality control for collective intelligence
- Regulated environments
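The propose/approve lifecycle can be sketched as follows. This is a minimal model of the flow described above; the function names (`propose_learning`, `approve`, `active_learnings`) and record shape are assumptions for illustration, not the library's API.

```python
# Sketch of the PROPOSE lifecycle: proposals land with status="proposed"
# and are invisible to future context until a human approves them.

learnings: list[dict] = []

def propose_learning(text: str) -> int:
    """Agent-side tool: record a proposal and return its index."""
    learnings.append({"text": text, "status": "proposed"})
    return len(learnings) - 1

def approve(idx: int) -> None:
    """User-side review action."""
    learnings[idx]["status"] = "approved"

def active_learnings() -> list[str]:
    """Only approved learnings feed future context."""
    return [e["text"] for e in learnings if e["status"] == "approved"]

i = propose_learning("Refunds over $500 need manager sign-off.")
assert active_learnings() == []   # still pending review
approve(i)
assert active_learnings() == ["Refunds over $500 need manager sign-off."]
```

The gating in `active_learnings` is what makes this mode suitable for regulated environments: nothing the agent proposes affects behavior until a human signs off.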
Choosing the Right Mode
| Scenario | Recommended Mode |
|---|---|
| Capturing user names and preferences | ALWAYS |
| Building user memory automatically | ALWAYS |
| Tracking session progress | ALWAYS |
| Agent-driven knowledge capture | AGENTIC |
| Building entity knowledge graphs | AGENTIC |
| Compliance-sensitive learning | PROPOSE |
| High-value collective knowledge | PROPOSE |
Combining Modes
You can use different modes for different stores.

Performance Considerations
| Mode | Latency Impact | Cost Impact |
|---|---|---|
| ALWAYS | Adds ~1-2s per response | Extra LLM call per turn |
| AGENTIC | No additional latency | Only when agent calls tools |
| PROPOSE | No additional latency | Only when agent proposes |
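Combining modes per store, as described above, might look like the following configuration. The mapping shape is a hypothetical sketch; only the store identifiers and mode names come from this document.

```python
# Hypothetical per-store mode assignment, pairing each store with the
# mode recommended for it above. Cheap, high-value extraction runs on
# every turn; costlier or riskier learning is gated.

mode_by_store = {
    "user_profile": "ALWAYS",        # extra call per turn, consistent capture
    "user_memory": "ALWAYS",
    "session_context": "ALWAYS",
    "entity_memory": "AGENTIC",      # agent builds the graph only when relevant
    "learned_knowledge": "PROPOSE",  # human review before reuse
}
```

With this split, the fixed ~1-2s/extra-call overhead of ALWAYS mode applies only to the lightweight profile and session stores, while knowledge capture stays on-demand or human-gated.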