The second brain was a write-only system all along
Five years of PKM tools, better AI, and the same abandonment pattern — here's the actual diagnosis
What the second brain promised
In 2019, Tiago Forte published the framework that became 'Building a Second Brain.' The pitch was specific: offload information from your head into an organised external system. Capture everything. Surface it when you need it. Never lose an idea again.
The framework spread across productivity communities because it diagnosed something real. Knowledge workers are overwhelmed with information, and most of it evaporates. Articles bookmarked and never read, meeting notes never referenced again, book highlights that disappeared into a Readwise queue nobody checks. Tools like Roam Research had waitlists. Obsidian's Discord server grew to tens of thousands of members. The phrase 'second brain' entered the vocabulary of anyone who'd ever saved a tab and forgotten about it.
That was five years ago. The tools are better now. Obsidian added AI-powered semantic search. Notion launched agents that draft from your notes. Mem repositioned around automatic organisation that removes the need to tag or folder anything. But ask ten people who set up a PKM system in 2021 how often they actually use it. Most of them will pause before answering.
The abandonment pattern
The abandonment pattern is consistent enough to have a recognisable shape:
- Month 1: system setup. Hours spent choosing a tool, debating folders versus tags versus links, building templates. High energy.
- Month 2: active capture. Every interesting article goes in. Notes from meetings. Book highlights. Observations from code reviews.
- Month 3: slowdown. The inbox is full. Reviewing and tagging feels like a second job. The weekly review slips once, then consistently.
- Month 6: the system is a historical archive. You don't search it. You open it occasionally, feel vaguely guilty about unprocessed items, and close it.
- Month 12: you're searching your browser bookmarks for an article you definitely saved somewhere.
This is not a discipline failure. Telling people to be more consistent with their weekly review misses the root cause. The system was designed around capture. Capture is not the bottleneck.
Capture was never the bottleneck
The original second-brain framing assumed the constraint was loss of input. You read something useful, forget it, never apply it. Fix the loss, fix the problem. It was a sensible hypothesis.
But knowledge workers aren't failing because they forget things they've read. They're failing because they can't find and apply things at the moment of need. That's a retrieval problem, not a capture problem.
This distinction matters because optimising for capture makes retrieval harder. Every note you add raises the cost of searching the corpus slightly. If you've added 2,000 notes and your search and linking aren't working well, finding the right one at the right moment is harder than with 200 notes and a working memory of them.
The second-brain movement built excellent capture machinery and called it a knowledge management system. The machinery works. The management part doesn't.
Retrieval is the broken loop
A write-only system is one where writes succeed reliably and reads succeed rarely. Most PKM systems are write-only systems. The reads fail in three ways: a recall gap (you don't remember that you have anything relevant), a context gap (a surfaced note doesn't connect to the current situation), and an activation gap (even a relevant, surfaced note never gets applied to the task at hand).
Zettelkasten, the linking-every-note approach that Niklas Luhmann used to produce 70 books and 400 articles across a 40-year career, theoretically solves the context gap by building bidirectional connections between notes. In practice, maintaining meaningful links requires 30 to 60 minutes of active processing per note. The Zettelkasten practitioners who report genuine benefit are almost universally people whose work involves producing long-form written output. The method is a research tool. It is not a general knowledge management system.
What AI actually changed (and what it didn't)
The AI integrations that shipped in 2024 and 2025 address the recall gap directly. Obsidian's AI search surfaces notes you didn't know you had, given a natural-language query. Mem's auto-organisation removes the need to tag or file anything. Notion's AI drafts from your notes without you specifying which ones.
This is a real improvement. The recall gap is genuinely smaller with semantic search across a corpus. The question of 'do I have something on this?' now costs much less to answer.
But AI didn't fix the context gap or the activation gap. Surfacing a note doesn't help when the note doesn't connect to the current situation. AI-drafted summaries of your notes sometimes produce something you could have found with a direct search, at which point the captured corpus provides no real advantage.
The honest version of what AI changed: it made large corpora of notes more useful than small ones, for the people already building large corpora. It did not fix the incentive problem for people who stopped maintaining their system by month three.
| Approach | Solves | Doesn't solve | Works for |
|---|---|---|---|
| General capture (Notion, Bear, folder-based Obsidian) | Preserving interesting things | Retrieval at moment of need | Almost no one, long-term |
| Zettelkasten linking | Context connections between notes | Time cost per note (30-60 min) | Researchers, academic writers |
| AI-powered search (Obsidian AI, Mem, NotebookLM) | Recall gap | Context gap, activation gap | People with large, maintained corpora |
| Domain-specific reference libraries | All three retrieval gaps, within one domain | Breadth across topics | Most practitioners |
The one property that predicts whether a PKM sticks
The systems that persist past year one share one property: they're useful at the moment of retrieval without any maintenance step between capture and use.
A personal wiki of command-line incantations you copy from directly. A curated set of decision frameworks you open before a large architecture call. A running document of things that broke in production, with the fix attached. A library of SQL patterns for a specific database you work with daily. Each of these has a predictable retrieval context.
You don't need to remember that you saved something. You visit the document when you're doing the related task. The note connects directly to the task, without a translation layer.
The second-brain framing pushed people toward general-purpose knowledge capture. That's the hard version of the problem. Domain-specific reference libraries, small and flat and built around recurring tasks, are the tractable version. Most people would be better served building three or four of those than maintaining one large general system they search twice a year.
“Build the system around the retrieval loop, not the capture. The captures will take care of themselves.”
What to do with this
If your PKM system is dormant, don't restart it. Archive it. Pick one context where you have recurring knowledge work — code review, system design, technical writing, hiring, incident response, whatever you do often enough to have patterns — and build a single-purpose reference document for that context.
Keep it flat. Don't organise it; search it. Add to it only when you find something you'd actually look for again. If you're not sure whether you'd look for it again, don't add it.
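As a concrete sketch of the flat-file workflow (the filename and entries here are hypothetical, just for illustration), one plain Markdown file per context plus ordinary `grep` is the whole system:

```shell
# incidents.md: a hypothetical flat reference file for incident response.
# One entry per incident — a one-line headline, then the fix. No folders, no tags.
cat > incidents.md <<'EOF'
## 2024-03-02 postgres connection pool exhausted
Fix: raised max_connections and put pgbouncer in transaction mode in front.

## 2024-05-19 deploy stuck on migration lock
Fix: killed the stale advisory lock, then re-ran the migration.
EOF

# Retrieval is a plain text search at the moment of need:
# -i for case-insensitive matching, -A2 to print the fix lines
# that follow the matching headline.
grep -i -A2 'migration' incidents.md
```

The point of the sketch is that there is no maintenance step between capture and use: appending an entry and searching for it are the same two primitives you already know.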
That's not a second brain. It's a reference library. It will stay useful longer precisely because it isn't trying to capture everything.
Five years of the experiment clarified one thing: the question was never 'which tool should I use' but 'for which retrieval context am I building this?' The tools got better. The question didn't go away.