The problem
AI coding agents often start each session cold. They read a CLAUDE.md or AGENTS.md, inspect a few files, and rebuild context from scratch. The larger the project, the more time goes into rediscovering decisions that were already made.
That gets worse when work moves between tools. Claude Code may handle one session, Codex another, and a browser or CLI agent a third. Without a shared project memory, each tool asks the same architectural questions again: why this module exists, which design choice was rejected, what still needs human input, and what should not be touched.
LLM Code Wiki is a small repo-local answer to that problem. It gives agents a structured markdown wiki to read before they change code, then asks them to update the relevant notes after the work. The real value is not documentation volume. It is continuity: the next session starts with the project’s current map instead of a fresh guess.
What it is
LLM Code Wiki is a markdown template you drop into a project. It defines the wiki structure, page types, frontmatter, session log, open questions, and maintenance routines. Obsidian can read it as a graph, but the important consumer is the next coding agent.
The template is agent-agnostic. It works with Claude Code, Codex, Cursor, Gemini CLI, or any tool that reads markdown files and writes to disk. A symlink between AGENTS.md and CLAUDE.md lets both Claude and Codex follow the same project instructions.
The core design choice is ownership. The wiki separates what the agent should verify from source, what a human must own, and what the agent is allowed to maintain.
Source truth. Module APIs, type signatures, route files, dependency graphs, and other facts that can be read directly from code. The wiki should not duplicate these. If rg can answer the question, the note should point to the source instead of restating it.
Human decisions. Architecture choices, rejected options, product constraints, naming decisions, and unresolved questions. Agents can read and propose updates here, but they should not rewrite intent without confirmation.
Agent-maintained map. Entity pages for services and modules, concept pages for patterns, and flow notes that explain how work moves across the codebase. The agent can update this layer after it has inspected the code and made changes.
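As a sketch, a single entity page might keep these three layers visibly separate. The section names and file paths below are illustrative, not part of the template:

```markdown
## Auth service
Handles session tokens. Agent-maintained summary; updated after changes.

### Source pointers (verify from code, do not duplicate)
- Routes: src/auth/routes.ts
- Public types: src/auth/types.ts

### Decisions (human-owned)
- 2025-03: JWT over server sessions; revisit if websockets land.

### Open questions
- Should token refresh move into middleware?
```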
How it works in practice
You copy the template into wiki/SCHEMA.md, fill in the project name and paths, and ask the agent to bootstrap the wiki. The agent scans the project structure, reads package.json or pyproject.toml, checks recent git history, and creates the first map: an index page, entity pages for major modules, open questions, and decision notes where the history is clear.
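After bootstrapping, the wiki on disk might look something like this. The layout is a hypothetical sketch; the template defines its own structure:

```
wiki/
  SCHEMA.md            # the copied template
  Index.md             # entry point the agent reads first
  log.md               # append-only session log
  open-questions.md
  entities/
    auth.md
    billing.md
  decisions/
    2025-03-jwt.md
```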
From there, the wiki grows with the code. After a coding session, the agent updates the pages it touched, records new decisions, and appends a short log entry. Session startup stays light: read the index, recent log, and open questions first, then load only the pages relevant to the task.
Every page carries YAML frontmatter with a type, status, dependencies, and a last_reviewed date. Obsidian can query those fields and show the shape of the project: which concepts are central, which notes are orphaned, and which flows connect to which services.
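A minimal sketch of such frontmatter, assuming the fields named above; the values and the exact field names here are invented for illustration:

```yaml
---
type: entity           # entity | concept | flow | decision
status: active
depends_on: [auth, billing]
last_reviewed: 2025-04-02
---
```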
A periodic review catches broken wikilinks, stale pages, missing entities, and contradictions between notes. Because code is machine-readable, the agent can compare parts of the wiki with the actual repository. That gives the system a useful constraint: notes should stay close to the code, and anything uncertain should be flagged rather than smoothed over.
Where it came from
Andrej Karpathy published the LLM Wiki pattern in April 2026 as a general approach to building personal knowledge bases with LLMs. The useful idea is simple: let the model maintain an interlinked wiki so knowledge can accumulate instead of being re-derived on every query.
Codebases need a stricter version of that pattern. Code has a machine-readable source of truth. Changes are visible through git. Some information should be verified from files, some should be owned by humans, and some can be maintained by the agent. Treating all of that as one flat knowledge base creates drift.
I built the code-specific extension after running into that drift across multiple projects. The first working version came from this portfolio rebuild, where the wiki tracked component status, design decisions, open questions, and build progress. Extending it to TypeScript monorepos and multi-agent workflows made the pattern more reusable.
The template also draws from Cline’s Memory Bank pattern, Microsoft’s deep-wiki plugin, the CodeWiki paper from FSoft-AI4Code, and Tobi Lütke’s qmd tool. The common thread is practical: keep useful project memory close to the repository, searchable, and reviewable.
What it shows
This project is small by design, but it points at a larger engineering problem: AI tools need operating context, not just access to files. The useful part is the boundary between source truth, human decisions, and agent-maintained notes. That boundary is what keeps the wiki helpful instead of turning it into stale documentation.
For a team, the same pattern can support onboarding, agent handoff, architecture review, and long-running feature work. The public repository keeps the template simple enough to inspect and copy. The heavier implementation belongs in the projects that use it, where frontmatter checks, review dates, and repository-specific rules can be made stricter.