NotebookLM
Google · Research assistant · Document Q&A

NotebookLM lets you load documents and ask questions about them. Since late 2025 it saves conversations within notebooks. It's excellent for research sessions — synthesizing PDFs, extracting from reports, generating briefings. But it has no structured memory layer. It doesn't learn who you are, what you've decided, or how you prefer to work. Each notebook is an island — there's no unified memory across your work.
Brian vs NotebookLM

Brian builds a knowledge layer about you and your work that grows over time. NotebookLM gives you conversations about documents within isolated notebooks.
mem0 / OpenMemory
Open source · Memory layer · Auto-categorization

mem0 is an open-source memory layer available via MCP. It stores and retrieves memories with auto-categorization across four layers — conversation, session, user, and organizational. OpenMemory MCP is their Claude integration. mem0's memory structure is more implicit than explicit. It auto-categorizes rather than letting you define types. There's no per-memory salience scoring, no pinning for always-on recall, and retrieval relies on embedding similarity without the structural signal that a typed schema provides.
Brian vs mem0

Brian gives you explicit control over memory structure — 9 types, salience scores, pinning, and spaces. mem0 auto-manages memory with less user visibility into how it's organized and prioritized.
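To make the contrast concrete, here is a minimal sketch of what an explicit, typed memory record with salience and pinning could look like. All names here (the `MemoryType` values, field names, and the `recall` helper) are illustrative assumptions, not Brian's actual schema or API.

```python
from dataclasses import dataclass
from enum import Enum

class MemoryType(Enum):
    # Illustrative subset; a full system might define 9 such types.
    FACT = "fact"
    DECISION = "decision"
    PREFERENCE = "preference"

@dataclass
class Memory:
    type: MemoryType
    content: str
    salience: float = 0.5   # per-memory priority score, e.g. in [0, 1]
    pinned: bool = False    # pinned memories are always recalled
    space: str = "default"  # scope, e.g. a personal vs. team space

def recall(memories, limit=3):
    """Return pinned memories first, then the highest-salience rest."""
    pinned = [m for m in memories if m.pinned]
    rest = sorted((m for m in memories if not m.pinned),
                  key=lambda m: m.salience, reverse=True)
    return (pinned + rest)[:limit]

memories = [
    Memory(MemoryType.FACT, "Works in fintech", salience=0.9),
    Memory(MemoryType.DECISION, "Chose Postgres over MySQL", pinned=True),
    Memory(MemoryType.PREFERENCE, "Prefers concise answers", salience=0.4),
]
top = recall(memories)  # pinned decision first, then by salience
```

The point of the typed schema is that recall can use structural signals (type, salience, pinned) rather than embedding similarity alone.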
Zep
Developer API · Knowledge graph · Temporal memory

Zep is a memory layer built for AI agents, accessed via developer API and now also available via MCP (Graphiti MCP Server). It uses a temporal knowledge graph to track entities, relationships, and facts with time-based validity. It's infrastructure for developers building memory-aware agent pipelines — not a tool for individuals using Claude directly. Setup requires engineering work and the interface is an API, not a dashboard.
Brian vs Zep

Brian is for individuals and teams using Claude directly — install, connect, use. Zep is developer infrastructure for building memory into custom AI applications.
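"Facts with time-based validity" can be sketched as records carrying a validity interval, so queries return only what was true at a given moment. The field names below are assumptions for illustration, not Zep's actual data model.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Fact:
    subject: str
    predicate: str
    obj: str
    valid_from: datetime
    valid_until: Optional[datetime] = None  # None means still valid

def facts_at(facts, when):
    """Return only the facts valid at the given point in time."""
    return [
        f for f in facts
        if f.valid_from <= when
        and (f.valid_until is None or when < f.valid_until)
    ]

facts = [
    # An older fact superseded by a newer one at the same instant.
    Fact("user", "works_at", "Acme", datetime(2023, 1, 1), datetime(2024, 6, 1)),
    Fact("user", "works_at", "Globex", datetime(2024, 6, 1)),
]
then = facts_at(facts, datetime(2024, 1, 1))  # the Acme fact
now = facts_at(facts, datetime(2025, 1, 1))   # the Globex fact
```

This is what lets a temporal graph answer "where did the user work last year?" without the old fact contaminating the current picture.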
Letta
Open source · Agent framework · Self-modifying memory

Letta is an open-source framework for building stateful AI agents with sophisticated memory management. It separates memory into core (always in context), recall (persistent conversation history), and archival (processed external knowledge). Agents can modify their own memories autonomously. It's the most architecturally ambitious memory system available — and it's entirely developer-focused. Building a Letta agent requires writing code, managing infrastructure, and understanding agent architecture. There's no consumer-facing interface.
Brian vs Letta

Brian is memory you use today. Letta is an agent framework you build on. Different tools, different audiences.
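The core/recall/archival split described above can be sketched as three tiers with different retrieval behavior: core is always injected, recall is a rolling history, archival is searched on demand. The class and method names are illustrative, not Letta's actual API.

```python
class AgentMemory:
    def __init__(self, core: str):
        self.core = core     # always included in the prompt context
        self.recall = []     # persistent conversation history
        self.archival = []   # processed external knowledge, searched on demand

    def remember_turn(self, role: str, text: str):
        self.recall.append((role, text))

    def search_archival(self, keyword: str):
        # Keyword match stands in for whatever retrieval the framework uses.
        return [doc for doc in self.archival if keyword.lower() in doc.lower()]

    def build_context(self, query: str, recent: int = 5):
        """Core is always present; recall and archival are selective."""
        return {
            "core": self.core,
            "recent_turns": self.recall[-recent:],
            "relevant_docs": self.search_archival(query),
        }

mem = AgentMemory(core="User prefers concise answers")
mem.remember_turn("user", "Summarize the Q3 report")
mem.archival = ["Q3 sales report", "Meeting notes from March"]
ctx = mem.build_context("sales")
```

A self-modifying agent would additionally expose tools that let the model rewrite `core` or promote items between tiers; that loop is omitted here.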
Claude built-in memory
Anthropic · Native feature · Auto-accumulation

Claude's native memory has grown significantly. It now supports project-scoped contexts, auto-memory that accumulates knowledge without manual input, Google Drive integration on paid plans, and periodic consolidation (Auto Dream) to manage memory decay. It's a solid convenience layer built into the product. What it lacks is the explicit structure that power users need: typed memory with 9 categories, per-memory salience scoring, pinning for always-on recall, and a dashboard to browse, search, and manage what Claude knows.
Brian vs Claude

Brian gives you the structure and control that Claude's native memory doesn't — typed memories, salience scores, pinning, and a full management dashboard. Claude's built-in memory is automatic but opaque; Brian is explicit and user-controlled.