A daily-journal Agent that genuinely remembers across runs — not by replaying conversation history, but by summarising each session into a /memory/working.md file in a KnowledgeStore and re-injecting that file at the start of every later conversation. Two sessions are run with different Agent instances pointed at the same store; the second one (a brand-new object) recalls what the user said in the first.
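Before the full framework example, the core loop can be sketched with nothing but the standard library: read a working-memory file at session start, prepend it as context, and write a rollup back at session end. Everything here (`MemoryStore`, `run_session`) is illustrative, not framework API, and the "summary" is a plain concatenation where a real aggregate would call the LLM.

```python
from pathlib import Path
import tempfile


class MemoryStore:
    """Tiny stand-in for a knowledge store: a directory of files."""

    def __init__(self, root: Path):
        self.root = root

    def write(self, relpath: str, text: str) -> None:
        path = self.root / relpath.lstrip("/")
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(text)

    def read(self, relpath: str) -> str:
        path = self.root / relpath.lstrip("/")
        return path.read_text() if path.exists() else ""


def run_session(store: MemoryStore, user_messages: list[str]) -> list[str]:
    # 1. Inject prior working memory at the start of the conversation.
    context = store.read("/memory/working.md")
    prompt_prefix = [context] if context else []
    # 2. (The LLM calls would happen here; we just build the transcript.)
    transcript = prompt_prefix + user_messages
    # 3. At session end, roll the session up into the store. A real
    #    aggregate would ask the LLM for a summary; we concatenate.
    summary = "\n".join([context] + user_messages).strip()
    store.write("/memory/working.md", summary)
    return transcript


store = MemoryStore(Path(tempfile.mkdtemp()))
run_session(store, ["Learning espresso."])
# A brand-new "session" sees the earlier entry via the injected file.
second = run_session(store, ["What was I doing?"])
print(second[0])  # prints: Learning espresso.
```

The key design point, mirrored by the real example below, is that persistence lives entirely in the store: `run_session` keeps no state between calls, just as the second `Agent` instance below keeps none.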
"""06 · Journal companion — knowledge store with working memoryPersistent agent memory using the framework's three knowledge primitives:- ``KnowledgeStore`` — virtual filesystem for agent state.- ``WorkingMemoryAggregate`` — an LLM-driven summary rollup that runs at the end of every conversation and writes ``/memory/working.md``.- ``WorkingMemoryPolicy`` — an assembly policy that reads ``/memory/working.md`` and injects it as context at the start of every subsequent conversation.The agent therefore "remembers" what you told it even after a full restart,because the state lives in the knowledge store — not in conversationhistory.Run:: .venv-beta/bin/python 06_journal_companion.py"""importasyncioimportshutilimporttempfilefrompathlibimportPathfromautogen.betaimportAgent,KnowledgeConfigfromautogen.beta.aggregateimportAggregateTrigger,WorkingMemoryAggregatefromautogen.beta.configimportGeminiConfigfromautogen.beta.knowledgeimportDiskKnowledgeStorefromautogen.beta.policiesimportConversationPolicy,WorkingMemoryPolicydefsection(title:str)->None:print(f"\n── {title} ───")asyncdefmain()->None:config=GeminiConfig(model="gemini-3-flash-preview",temperature=0)# Use a fresh tempdir so the example is reproducible.workdir=Path(tempfile.mkdtemp(prefix="journal-companion-"))try:store=DiskKnowledgeStore(str(workdir))defbuild_agent()->Agent:returnAgent("journal",prompt=("You are a supportive daily journal companion. Keep a ""running understanding of what the user is working on. 
""Be brief and reference their past entries when relevant."),config=config,knowledge=KnowledgeConfig(store=store,aggregate=WorkingMemoryAggregate(config=config),# on_end=True: roll up working memory when each conversation finishesaggregate_trigger=AggregateTrigger(on_end=True),),assembly=[WorkingMemoryPolicy(),# inject /memory/working.md on every LLM callConversationPolicy(),# then filter to conversation events],)section("Session 1 — tell the journal what you're doing")agent1=build_agent()r=awaitagent1.ask("Today I started learning to build a home espresso setup. Still ""choosing between a Silvia Pro and a Linea Mini.")print(r.body)r=awaitr.ask("Also started reading The Pragmatic Programmer. On chapter 2 about orthogonality. That's the whole update.")print(r.body)# When the `with await agent1.ask(...)` exits the `_execute`,# WorkingMemoryAggregate writes /memory/working.md.working=awaitstore.read("/memory/working.md")print()print("## /memory/working.md after session 1")print(working)section("Session 2 — new Agent instance, same store: memory persists")agent2=build_agent()r2=awaitagent2.ask("Quick check-in: what was I working on? Answer in one line.")print(r2.body)# The answer should mention espresso and/or Pragmatic Programmer# even though agent2 is a brand-new object with no prior state,# because WorkingMemoryPolicy injected /memory/working.md as a prompt.finally:shutil.rmtree(workdir,ignore_errors=True)if__name__=="__main__":asyncio.run(main())