The Boring Thing That Made Claude Code 10x Better
We've been using Claude Code at SparkLoop for a while now. And honestly, the thing that made the biggest difference wasn't a better model or a cleverer prompt. It was something we started doing before we even adopted AI coding tools.
We wrote wiki articles.
The Problem We Were Actually Trying to Solve
SparkLoop is a mature product. Tens of thousands of people use it. And like a lot of mature products, it had a documentation problem.
Everything was scattered. Some things lived in Google Docs. Some context was buried in PR descriptions. Some knowledge only existed in people's heads — mostly mine. Every time we onboarded a new engineer, I'd end up on two, three, four hours of calls just walking them through how things worked. Where to find stuff. Why certain things were built a certain way.
So we decided to fix that. We started writing internal wiki articles about our features. Core flows, how things connect, the reasoning behind key decisions. Nothing fancy. Just clear, structured documentation that lives right in the repository.
That was the original goal. Help humans understand the codebase faster.
Then we started using Claude Code, and something unexpected happened.
The Achilles' Heel of AI Coding
Here's the thing about LLMs: they're incredibly smart. They can reason about complex code, spot patterns, suggest elegant solutions. But they have one massive weakness.
They forget everything. Every single time.
The best analogy I've found is this: imagine hiring the smartest engineer you've ever met. They learn fast, they write great code, they understand complex systems quickly. But every morning they walk in with zero memory of what your product does, how it's built, or why you made the choices you made.
That's what working with an AI coding agent feels like without good context.
You end up spending a huge chunk of your time just re-explaining things. "Here's how our partner program works. Here's why we have two different referral flows. Here's what this webhook does." Over and over.
Wiki Articles as AI Context
When we pointed Claude Code at our wiki articles, we noticed the quality of the output changed dramatically. All of a sudden, the follow-up questions were noticeably smarter and more on point.
But the interesting part is how we point it at them.
At 50+ wiki articles (and growing), it would be too cumbersome for a person to remember where everything lives, so we developed a system: rather than using plain Plan Mode, we built a custom command specifically for shaping projects. When you kick off a new feature, the command instructs Claude Code to read the wiki documentation, find the relevant files, build a complete picture of what we're trying to build, and then ask follow-up questions before writing any code.
Most of the time, we don't even need to tell it which articles to read. The command figures that out on its own.
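For context, Claude Code custom slash commands are just Markdown files in .claude/commands/. A stripped-down sketch of what such a command could look like (the filename, paths, and wording here are illustrative, not our exact command):

```markdown
<!-- .claude/commands/shape.md — invoked as /shape <feature description> -->
You are shaping a new feature: $ARGUMENTS

Before writing any code:
1. Read docs/wiki/index.yml and identify which wiki articles are
   relevant to this feature.
2. Read only those articles, plus the source files they reference.
3. Summarize your understanding of the feature and how it fits into
   the existing system.
4. Ask any follow-up questions you need answered before planning.

Do not write or modify code until these questions are resolved.
```

The $ARGUMENTS placeholder is how Claude Code injects whatever you type after the command name, so the same command works for any feature.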
Here's how. This is an idea I got from Brian Casel, the creator of Agent-OS, and it's deceptively simple: we maintain an index file. It's just a YAML file with the name of each article and a one-line description of what it covers.
The insight is that the LLM only needs to scan that index to figure out which articles are relevant for a given task. Then it goes and reads just those articles. If it had to read all 50+ wiki articles every time to figure out which ones mattered, it would burn through tokens fast. The index file makes the whole thing efficient.
And here's the nice part: when we create or update a wiki article, the skill that generates it also maintains the index. So the index never falls out of sync.
The result is that Claude Code suddenly understands our domain. It knows about the relationships between different parts of the system. It understands why things are structured the way they are. The code it produces isn't just technically correct — it actually fits our product.
How We Write Them (With AI, Obviously)
Here's where it gets meta. We built a custom command — along with a skill — specifically for generating these wiki articles.
The AI reads the code and writes a draft. It's really good at figuring out the what and the how. It can trace through the codebase and explain what a feature does and how it's implemented.
What it can't do is explain the why.
Why did we build it this way? What was the business context? What trade-offs did we make? That part is still on us. So our process looks like this: the custom command generates a comprehensive draft, and then a human adds the reasoning, the context, the decisions that aren't visible in the code.
We designed these articles to be concise on purpose. Not short, necessarily, but focused. Someone should be able to read a wiki article about any feature and understand everything they need to know at a high level in 10-15 minutes. These aren't code dumps. They're explanations.
The Maintenance Problem (Solved)
There's an obvious objection here: code changes all the time. Won't these articles go stale within weeks?
This is where the same custom command saves us again. After a PR is merged, you can run it and say: "Look at what changed in this PR. What needs to be updated in the wiki docs?"
It reads the diff, cross-references the existing articles, and either updates them or flags what's outdated. The documentation maintains itself — or at least, it maintains itself with a nudge.
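The cross-referencing itself is done by the LLM, but a rough mechanical approximation shows the idea: take the file paths touched by a PR and flag any wiki article that mentions them. A stdlib-only Python sketch, with hypothetical paths and function names:

```python
from pathlib import Path

def flag_stale_articles(changed_paths: list[str], wiki_dir: str) -> list[str]:
    """Return wiki articles that mention any file changed in the PR.

    This is a crude textual cross-reference; the actual command lets the
    LLM read the diff and the articles and judge relevance itself.
    """
    flagged = []
    for article in sorted(Path(wiki_dir).glob("*.md")):
        text = article.read_text()
        # An article that names a changed file is a candidate for updating.
        if any(path in text for path in changed_paths):
            flagged.append(article.name)
    return flagged
```

In practice you would hand this kind of candidate list (or simply the raw diff) to the wiki-update command and let the model decide what actually changed in substance.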
This was the piece that made the whole system sustainable. Writing 50 articles is a big investment. But if they rot after a month, that investment is wasted. The automated maintenance loop is what makes the whole thing work long-term.
Conclusion
What we ended up with is a flywheel:
Better wiki articles lead to better AI output. Better AI output means faster development. The AI helps write and maintain the wiki articles, which makes the AI even more effective.
The thing that unlocked all of this wasn't a technique. It was a mental model.
We stopped thinking of AI as a magic code generator and started thinking of it as a super smart engineer... who happens to have a serious memory problem.
Once you frame it that way, the question changes. It's no longer "why isn't AI giving me good code?" It becomes "okay, you're brilliant, you read faster than anyone I've ever met, but you can't remember what we do here. How can we help you?"
And the answer turned out to be the same thing that helps every new engineer: write it down.