import { Callout } from 'fumadocs-ui/components/callout';
Andrej Karpathy recently published a pattern called LLM Wiki. It went viral. The core idea: instead of re-deriving knowledge from raw documents on every query (the RAG approach), have an LLM incrementally build and maintain a persistent wiki. The knowledge compounds. Cross-references accumulate. Contradictions get flagged. The human curates sources and asks questions; the LLM does all the bookkeeping.
We’ve been building exactly this since 2025. Not because we read the gist (it didn’t exist yet), but because we ran into the same problem from a different direction: AI tools forget everything between sessions, and the maintenance burden of keeping a knowledge base current is what kills it.
This post maps LLM Wiki’s concepts to what Nowledge Mem already ships, explores where the pattern can go further, and shares what we’re building next.
## The diagnosis is right
Karpathy identifies the core problem precisely:
> The tedious part of maintaining a knowledge base is not the reading or the thinking. It’s the bookkeeping.
We agree completely. Cross-references, version tracking, contradiction detection, synthesis, keeping everything consistent as knowledge grows. Humans abandon wikis because the cost of maintenance outpaces the value. LLMs don’t get bored with bookkeeping. That insight is exactly right.
Where we diverge is the execution model. LLM Wiki is a manual workflow: you drop a source, tell the LLM to process it, guide the extraction, check the results. It works, and for many use cases it’s the right level of control. But what if the wiki maintained itself?
## Mapping LLM Wiki to Nowledge Mem
LLM Wiki has three layers: raw sources, the wiki, and the schema. Three operations: ingest, query, lint. Here’s how they map.
### Ingest
In LLM Wiki, you drop a source and tell the LLM to process it. The LLM reads it, writes summary pages, updates entity pages, notes contradictions, appends to a log. A single source might touch 10-15 wiki pages.
In Nowledge Mem, knowledge enters from multiple channels simultaneously. Your coding sessions in Claude Code and Cursor auto-sync. The browser extension captures insights from ChatGPT, Gemini, and Claude conversations as they happen. You paste a URL into Timeline and it gets parsed, chunked, and indexed. You drop a PDF into the Library. Each entry point feeds the same knowledge system.
The key difference: you don’t have to remember to ingest. Capture happens through the tools you already use, in the background.
When a new memory arrives, Background Intelligence kicks in. Entity extraction identifies people, technologies, concepts. EVOLVES detection checks whether this new piece replaces, enriches, confirms, or challenges something you already know. If enough memories converge on the same topic, a Crystal forms: a synthesized reference page built from three or more sources.
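The shape of that ingest pass can be sketched in a few lines. Everything below is illustrative: the type names, the relation labels, and the three-source Crystal threshold come from the description above, not from Nowledge Mem’s actual API, and the classification logic is a toy stand-in for what would really be an LLM call:

```typescript
// Hypothetical shapes; Nowledge Mem's real internals differ.
type Relation = "replaces" | "enriches" | "confirms" | "challenges" | "unrelated";

interface Memory {
  id: string;
  topic: string;
  entities: string[];
  text: string;
}

// EVOLVES-style check: how does a new memory relate to an existing one?
// A real system would ask an LLM; this toy version keys off shared entities.
function classify(incoming: Memory, existing: Memory): Relation {
  const shared = incoming.entities.filter((e) => existing.entities.includes(e));
  if (shared.length === 0) return "unrelated";
  return incoming.topic === existing.topic ? "enriches" : "confirms";
}

// Crystal rule from the post: synthesize once 3+ memories converge on a topic.
function shouldFormCrystal(store: Memory[], topic: string): boolean {
  return store.filter((m) => m.topic === topic).length >= 3;
}
```

The point of the sketch is the control flow, not the heuristics: every new memory is compared against what already exists, and synthesis is triggered by convergence rather than by a human.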
One source can touch dozens of connections. You don’t ask it to. It just does.
### Query
LLM Wiki’s query operation searches the wiki, synthesizes an answer, and files valuable analyses back as new pages.
Nowledge Mem’s search pipeline runs six strategies in parallel: semantic embeddings, full-text matching, entity graph traversal, community clusters, label filtering, and relationship-edge walking. Fast mode returns results in under 100ms. Deep mode adds LLM re-ranking for complex temporal or multi-hop queries.
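The fan-out-and-merge structure of a pipeline like this is worth making concrete. The sketch below is a minimal assumption-laden version: three stand-in strategies instead of six, fake scores, and a best-score-wins merge, which may or may not match how the real pipeline fuses results:

```typescript
interface Hit { id: string; score: number; }
type Strategy = (query: string) => Promise<Hit[]>;

// Stand-in strategies; each would hit a different index in a real system
// (embeddings, full-text, entity graph, and so on).
const strategies: Strategy[] = [
  async (q) => [{ id: "semantic:" + q, score: 0.9 }],
  async (q) => [{ id: "fulltext:" + q, score: 0.8 }],
  async (q) => [{ id: "entity:" + q, score: 0.7 }],
];

// Fast mode: run all strategies concurrently, merge, keep best score per id.
async function search(query: string): Promise<Hit[]> {
  const batches = await Promise.all(strategies.map((s) => s(query)));
  const best = new Map<string, number>();
  for (const hit of batches.flat()) {
    best.set(hit.id, Math.max(best.get(hit.id) ?? 0, hit.score));
  }
  return [...best.entries()]
    .map(([id, score]) => ({ id, score }))
    .sort((a, b) => b.score - a.score);
}
```

Deep mode would slot in after the merge: take the fused candidate list and hand it to an LLM for re-ranking before returning.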
The query-to-knowledge loop is where things get interesting. In our Graph Intelligence Agent, you can select nodes on your knowledge graph, ask questions about them, and the agent reasons across your entire knowledge base with 26 specialized tools: path finding, bridge node discovery, EVOLVES chain tracing, community analysis, PageRank computation. The agent’s analysis shows up as step-by-step visual highlights on the canvas, and its findings can be saved back as Crystals or reports.
This is the LLM Wiki “query” operation, but interactive and visual. You’re not just searching text files. You’re exploring a live graph of everything you know.
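Of the agent tools listed above, PageRank is simple enough to sketch in full. This is the textbook power iteration over an adjacency list, not Nowledge Mem’s implementation (it assumes every node has at least one outgoing edge and skips dangling-node redistribution):

```typescript
// Textbook PageRank by power iteration.
// `graph` maps each node to the nodes it links to.
function pageRank(
  graph: Map<string, string[]>,
  damping = 0.85,
  iterations = 50,
): Map<string, number> {
  const nodes = [...graph.keys()];
  const n = nodes.length;
  let rank = new Map(nodes.map((v) => [v, 1 / n]));
  for (let i = 0; i < iterations; i++) {
    // Every node starts each round with the teleport share...
    const next = new Map(nodes.map((v) => [v, (1 - damping) / n]));
    // ...then receives a damped, evenly-split share from each inbound link.
    for (const [v, outs] of graph) {
      const share = (rank.get(v)! * damping) / Math.max(outs.length, 1);
      for (const u of outs) next.set(u, (next.get(u) ?? 0) + share);
    }
    rank = next;
  }
  return rank;
}
```

On a knowledge graph, a high-rank entity is one that many well-connected memories point at, which is exactly the kind of node worth surfacing in an analysis.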
### Lint
LLM Wiki suggests periodically asking the LLM to health-check the wiki: find contradictions, stale claims, orphan pages, missing cross-references.
In Nowledge Mem, this runs automatically. Background Intelligence includes 13 tasks: 7 on schedules, 6 triggered by events. Contradiction detection, staleness checking, and community discovery all happen without you asking. Results show up as Flags in your Timeline: “Your March assessment contradicts your October conclusion.” “This deployment guide was superseded by newer notes.”
The daily Working Memory briefing (`~/ai-now/memory.md`) is the equivalent of LLM Wiki’s `index.md` and `log.md` combined. It surfaces active topics, unresolved flags, recent changes, and priorities. Every connected tool reads it at session start.
## What changes when the wiki runs itself
LLM Wiki describes a workflow where the human directs and the LLM executes. That’s a good model. But we’ve found that removing manual steps changes the economics of the system:
**Capture becomes ambient.** You don’t think about what to save. The browser extension captures while you chat. Coding sessions auto-sync. URL parsing is instant. The cost of adding knowledge to the system approaches zero.

**Maintenance becomes continuous.** LLM Wiki suggests periodic lint passes. Nowledge Mem runs contradiction detection, evolution tracking, and crystal synthesis on event triggers and daily schedules. The wiki never gets stale because it never waits for you to clean it up.

**Knowledge works across tools.** LLM Wiki is typically one person, one LLM agent, one Obsidian vault. Nowledge Mem sits between you and every AI tool. Save a decision in Claude Code. Tomorrow, Cursor finds it. Next week, Gemini CLI uses it to inform a code review. The Working Memory briefing gives every tool the same starting context.

**The graph does what index.md can’t.** LLM Wiki uses a manually maintained index file for navigation. At scale, this breaks. Nowledge Mem’s knowledge graph handles this structurally: entity nodes, relationship edges, community clusters, and EVOLVES chains. Searching “distributed systems” finds your memory about “Node.js microservices” because they share entity connections, not because someone remembered to update an index.
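That “shared entity connections” behavior reduces to a one-hop expansion through an entity index. The sketch below is a deliberately simplified model (flat note/entity shapes, a single hop, no scoring), not the production traversal:

```typescript
interface Note { id: string; entities: string[]; }

// Invert notes into an entity -> note-ids index.
function buildIndex(notes: Note[]): Map<string, Set<string>> {
  const index = new Map<string, Set<string>>();
  for (const note of notes) {
    for (const entity of note.entities) {
      if (!index.has(entity)) index.set(entity, new Set());
      index.get(entity)!.add(note.id);
    }
  }
  return index;
}

// Find notes reachable from the query's entities, including notes that never
// mention the query terms but share an entity with one that does.
function entitySearch(queryEntities: string[], notes: Note[]): string[] {
  const index = buildIndex(notes);
  const seed = new Set(queryEntities.flatMap((e) => [...(index.get(e) ?? [])]));
  const expanded = new Set(seed);
  for (const note of notes) {
    if (!seed.has(note.id)) continue;
    // One hop: pull in every note sharing any entity with a seed note.
    for (const entity of note.entities) {
      for (const id of index.get(entity) ?? []) expanded.add(id);
    }
  }
  return [...expanded];
}
```

This is what a static `index.md` can’t do: the “microservices” edge connects a note about Node.js to a query about distributed systems without anyone maintaining that link by hand.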
## What we’re still learning from the pattern
The gist gets several things right that inform our roadmap:
**Deliberate ingest has value.** Our current Library pipeline is fire-and-forget: drop a file, it gets indexed. But sometimes you want to sit with a source and guide the extraction: “This paper’s main contribution is X, which relates to my earlier decision about Y.” We’re building a “Study” mode where you can interactively explore a source with the agent before extracting knowledge.
**Browsable knowledge pages matter.** LLM Wiki users open Obsidian and see their knowledge as readable pages. Our Graph Inspect view shows database fields. We’re redesigning it as rich Knowledge Pages: select an entity and see a rendered page with all connected memories, EVOLVES timeline, Crystal appearances, and community context.
**Portability is trust.** “The wiki is just a git repo of markdown files.” That’s powerful. We’re building wiki export: generate a folder of interconnected markdown files from your knowledge graph, with Obsidian-compatible wikilinks and YAML frontmatter. Your knowledge should be portable.
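At its core, an export like this is just rendering each entity as a markdown file. The frontmatter fields and page layout below are an illustrative sketch of the Obsidian conventions (`[[wikilinks]]`, YAML frontmatter), not the final export format:

```typescript
interface EntityPage {
  title: string;
  tags: string[];
  linkedTitles: string[]; // related entities, rendered as [[wikilinks]]
  body: string;
}

// Render one entity as an Obsidian-style markdown page with YAML frontmatter.
function renderPage(page: EntityPage): string {
  const frontmatter = [
    "---",
    `title: "${page.title}"`,
    `tags: [${page.tags.join(", ")}]`,
    "---",
  ].join("\n");
  const links = page.linkedTitles.map((t) => `- [[${t}]]`).join("\n");
  return `${frontmatter}\n\n# ${page.title}\n\n${page.body}\n\n## Related\n\n${links}\n`;
}
```

Because the output is plain markdown, the exported folder works in Obsidian, in git, or in any text editor, which is the whole point of the portability argument.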
## If you want to set up your own LLM Wiki today
If Karpathy’s gist resonated with you, here are two paths:
**The manual path:** Follow the gist. Set up a folder structure, configure CLAUDE.md or AGENTS.md, use Obsidian for browsing. You’ll learn a lot about what works for your domain. It’s a good starting point for people who want full control over every step.
**The automated path:** Install Nowledge Mem. Save your first memory. Connect your primary AI tool. The system handles the rest: entity extraction, evolution tracking, contradiction detection, crystal synthesis, daily briefings. You keep the same human role Karpathy describes (curating sources, asking questions, thinking about what it all means) without the manual bookkeeping.
Both paths lead to the same place: knowledge that compounds instead of decays. The question is whether you want to build the plumbing yourself or start using it today.
## Further reading
- Knowledge evolution: How EVOLVES chains track the way your understanding changes
- Search architecture: Six strategies that replace `index.md`
- Background Intelligence: The 13 tasks that keep your knowledge current
- Crystals: Synthesized reference pages from converging sources
- Memory decay: How freshness and confidence scoring keep search relevant