# Views & Skills
Two LLM-facing concerns:

- Views decide what each participant sees of a session — the projection of the WAL that becomes the LLM's history when its handler runs Agent.ask(...).
- Skills decide what each agent advertises about itself — the markdown describing its capabilities, surfaced to other agents during peer lookup.
## ViewPolicy
ViewPolicy is a Protocol:
```python
from typing import ClassVar, Protocol

class ViewPolicy(Protocol):
    name: ClassVar[str]

    async def project(
        self,
        history: list[Envelope],
        *,
        participant_id: str,
        session: SessionMetadata,
    ) -> list[BaseEvent]: ...
```
It takes the WAL up to the current envelope and returns a list of BaseEvents that the framework feeds into the LLM turn as pre-populated stream history. Adapters declare a default; tenants can override per-session.
## Built-in Views
| View | Behaviour |
|---|---|
| `FullTranscript()` | Every EV_TEXT / EV_HANDOFF envelope, in order, no filtering beyond audience. Used by consulting. |
| `WindowedSummary(recent_n=N)` | The last N text envelopes. If the WAL is longer, prepends a CompactionSummary placeholder with a count of the elided turns. Used by conversation, discussion, workflow. |
Both honour audience: an envelope addressed only to [bob] doesn't appear in carol's projection.
## Resolving the Default
resolve_view_policy reads the adapter manifest's default_view_policy and instantiates the matching view from the registry. The default handler calls this once per turn — custom handlers should too, unless they're deliberately bypassing the standard projection model.
## Custom Views
Implement the protocol, give it a unique name, and pass it to the policy resolver.
The BaseEvent types you emit determine how the LLM sees the history: ModelRequest for messages "from the user," ModelMessage for messages "from the assistant," and so on. Look at autogen.beta.events for the full taxonomy.
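A sketch of one common shape: a view that filters by tag and maps envelopes onto those event types. Every class here is an illustrative stand-in for the framework's real types:

```python
import asyncio
from dataclasses import dataclass

# Stand-ins for the framework's event and envelope types;
# the real ones live in autogen.beta.events.
@dataclass
class ModelRequest:       # rendered as a message "from the user"
    content: str

@dataclass
class ModelMessage:       # rendered as a message "from the assistant"
    content: str

@dataclass
class Envelope:
    sender: str
    text: str
    tags: tuple[str, ...] = ()

class TagFilteredView:
    """Custom view sketch: only project envelopes carrying a given tag."""
    name = "tag_filtered"

    def __init__(self, tag: str):
        self.tag = tag

    async def project(self, history, *, participant_id, session=None):
        events = []
        for env in history:
            if self.tag not in env.tags:
                continue  # specialist agents skip unrelated chatter
            cls = ModelMessage if env.sender == participant_id else ModelRequest
            events.append(cls(env.text))
        return events

view = TagFilteredView("billing")
history = [
    Envelope("alice", "invoice #42 overdue", tags=("billing",)),
    Envelope("bob", "lunch?", tags=("social",)),
]
events = asyncio.run(view.project(history, participant_id="carol"))
```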
## CompactionSummary
When WindowedSummary elides envelopes outside its window, it prepends a CompactionSummary(text="...elided N turns") event. This is from autogen.beta.compact — the LLM sees it as a system-supplied note that "there is earlier history I'm not showing you." This keeps the LLM's behaviour grounded in long-running discussions without exploding the token budget.
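The elision logic can be sketched as follows, with a local stand-in for the real CompactionSummary class:

```python
from dataclasses import dataclass

@dataclass
class CompactionSummary:
    # Stand-in for autogen.beta.compact.CompactionSummary.
    text: str

def window_with_summary(turns: list[str], recent_n: int):
    """Sketch of WindowedSummary's elision: keep the last N turns and
    prepend a placeholder counting what was dropped."""
    if len(turns) <= recent_n:
        return list(turns)
    elided = len(turns) - recent_n
    return [CompactionSummary(f"...elided {elided} turns"), *turns[-recent_n:]]

projected = window_with_summary(["t1", "t2", "t3", "t4", "t5"], recent_n=2)
```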
## Skills (Markdown Frontmatter)
Skills are how an agent describes itself to other agents — markdown-with-frontmatter that's parsed by the hub and surfaced to LLM tools during peer lookup. Pass skill_md at registration; the hub stores the markdown verbatim and parses the frontmatter via parse_skill_frontmatter.
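A minimal sketch of what that parsing step involves; this is not the framework's implementation, which may accept full YAML and return a richer structure:

```python
import re

def parse_skill_frontmatter(skill_md: str) -> tuple[dict, str]:
    """Split '---'-delimited frontmatter from the markdown body."""
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", skill_md, re.DOTALL)
    if not match:
        return {}, skill_md               # no frontmatter: whole doc is body
    meta = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, match.group(2)

skill_md = """\
---
name: travel-planner
summary: Books flights and builds itineraries.
---
# Travel Planner
Searches fares and compares hotels across providers.
"""
meta, body = parse_skill_frontmatter(skill_md)
```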
## Fallback Skills
When no skill_md is provided, the hub generates one from the resume so peer lookup doesn't return empty handles.
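One way such a fallback could be constructed, as a sketch; the resume field names here are assumptions, not the hub's schema:

```python
def fallback_skill_md(agent_id: str, resume: dict) -> str:
    """Synthesise minimal skill markdown from resume fields."""
    summary = resume.get("summary", "No description provided.")
    return f"---\nname: {agent_id}\n---\n# {agent_id}\n\n{summary}\n"

md = fallback_skill_md("travel-planner", {"summary": "Books flights."})
```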
Use this if you're constructing skills programmatically — for example, when a tenant uploads a resume but no markdown.
## Updating a Skill After Registration
Replacing a skill emits AUDIT_KIND_SKILL_SET, the same audit shape as set_resume; tenant code can replace skills at any time.
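A toy in-memory hub illustrates the store-and-audit shape; the real hub's API and audit records differ:

```python
import asyncio
from dataclasses import dataclass, field

AUDIT_KIND_SKILL_SET = "AUDIT_KIND_SKILL_SET"

@dataclass
class TinyHub:
    # Hypothetical in-memory stand-in for the hub; the real hub persists
    # both the skill markdown and the audit record.
    skills: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    async def set_skill(self, agent_id: str, skill_md: str) -> None:
        self.skills[agent_id] = skill_md                     # stored verbatim
        self.audit_log.append((AUDIT_KIND_SKILL_SET, agent_id))

hub = TinyHub()
asyncio.run(hub.set_skill("travel-planner", "# v2 of the skill"))
```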
## Picking a View
Some heuristics for choosing or building a view:
- Short, focused exchanges — FullTranscript(). Token budget isn't the bottleneck; coherence is.
- Long-running discussions — WindowedSummary(recent_n=N), with N tuned to your participant count and turn density.
- Specialist agents that should ignore unrelated chatter — a custom view that filters by audience or tags.
- Privacy-sensitive workflows — a custom view that strips fields or redacts before projection.
Switching the view doesn't affect the WAL — every envelope is still there, every operator can still inspect it. Only the LLM's perception of history is shaped by the view.