GTD as Your AI Coding Workflow
You have an AI writing your code. But who is managing the AI?
I have been using Claude Code daily for months now. It is extraordinary at generating code, finding bugs, and suggesting improvements. But early on I noticed a pattern: the AI would surface ten things that needed attention, I would fix two, and the other eight would dissolve into the ether. A design review agent would flag inconsistencies. A reliability reviewer would catch error handling gaps. An architecture agent would suggest refactors. All of it useful. None of it tracked.
The AI was producing work faster than I could manage it. I needed a system.
The problem no one talks about
Most conversations about AI coding focus on generation — how fast can the AI write code, how accurate is the output, how many tokens per second. But generation is the easy part. The hard part is the same hard part it has always been: deciding what to work on, making sure nothing falls through the cracks, and maintaining confidence that your project is moving in the right direction.
When you have multiple AI agents reviewing your codebase — flagging design issues, suggesting improvements, catching reliability problems — you are essentially hiring a team of extremely fast consultants who produce recommendations and then immediately forget they made them. Without a system to capture and process those recommendations, you are worse off than before. You have more noise and the same amount of signal.
Connecting Claude Code to GTD
I built a GTD app called Capture GTD. It implements David Allen’s Getting Things Done methodology: Capture, Clarify, Organize, Reflect, Engage. Every commitment goes into an inbox, gets clarified into an actionable item, and lands in the right list with the right context.
The key move was giving Claude Code MCP access to Capture GTD. Claude can now read tasks, create tasks, clarify tasks, and query by context — the same operations I perform through the webapp. Then I created a dedicated “claude” context in my GTD system specifically for AI coding work. That context is the bridge between my human workflow and my AI workflow.
The setup itself is minimal. An MCP server exposes the GTD API. Claude Code connects to it. A context called “claude” filters the task list down to AI-relevant work. That is the entire infrastructure.
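To make that concrete, here is a rough sketch of the surface area involved. All of the names below — the task fields, `create_task`, `query_by_context` — are hypothetical stand-ins, not the real Capture GTD API; the point is only how small the contract is.

```python
from dataclasses import dataclass
from itertools import count
from typing import Optional

_ids = count(1)

@dataclass
class Task:
    id: int
    title: str
    context: Optional[str] = None  # e.g. "claude" for AI coding work
    clarified: bool = False

class GtdStore:
    """Stand-in for the Capture GTD API that the MCP server wraps."""

    def __init__(self):
        self.tasks = []

    def create_task(self, title, context=None):
        task = Task(next(_ids), title, context)
        self.tasks.append(task)
        return task

    def query_by_context(self, context):
        # The "claude" context filters the list down to AI-relevant work.
        return [t for t in self.tasks if t.context == context]
```

In practice each of these operations is exposed as an MCP tool, so Claude Code can call them the same way I use the webapp.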
How the workflow actually runs
Capture: agents feed the inbox
I have a fleet of Claude Code agents that run reviews against my codebase. A design review agent checks UI consistency. A reliability reviewer looks for error handling gaps, race conditions, missing retries. An architecture reviewer validates that domain boundaries are respected and patterns are followed consistently.
Each of these agents captures findings directly into my GTD inbox as unclarified “stuff.” A reliability review might produce a handful of new inbox items: a missing timeout on an API call, an unhandled edge case in a state machine, a log message that leaks user data. These go straight into the system. Nothing gets lost in terminal output that I close at the end of the day.
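The capture step is deliberately dumb — it is little more than an append. A sketch, with an invented agent name and findings:

```python
def capture_findings(inbox, agent_name, findings):
    """Drop raw agent findings into the inbox as unclarified 'stuff'."""
    for finding in findings:
        inbox.append({"source": agent_name, "note": finding, "clarified": False})

inbox = []
capture_findings(inbox, "reliability-reviewer", [
    "missing timeout on the sync API call",
    "unhandled edge case in the upload state machine",
    "log message leaks user data",
])
```

No triage happens here; deciding what any of this means is the next step’s job.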
Clarify: deciding what is actionable
Clarification is where GTD earns its keep. Every item in the inbox gets processed with the same questions David Allen prescribes: Is this actionable? What is the desired outcome? What is the next action?
I clarify items either through the Capture GTD webapp or by running a clarify command directly from Claude Code. Some items become next actions. Some become projects that need multiple steps. Some get filed as reference material. Some get trashed because the agent was wrong or the issue is not worth fixing.
This step is critical. Without it, the inbox becomes a guilt-inducing pile of unprocessed recommendations. With it, every item has a clear disposition.
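The clarifying questions reduce to a small decision tree. A sketch — the four disposition names here are my shorthand, not an official GTD vocabulary:

```python
def clarify(actionable, multi_step=False, keep_as_reference=False):
    """Apply the clarifying questions to one inbox item."""
    if not actionable:
        # Not actionable: either worth keeping for later, or trash.
        return "reference" if keep_as_reference else "trash"
    # Actionable: a single next action, or a multi-step project.
    return "project" if multi_step else "next_action"
```

Every inbox item exits this function with exactly one disposition, which is what keeps the pile from accumulating.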
Plan: big changes become projects
When I need to plan a large feature or refactor, I use a project-design command that coordinates multiple specialized agents. The architect agent defines domain changes. The PM agent writes BDD specifications. The server engineer, web engineer, and platform engineers each contribute their piece.
The output of that planning process flows back into Capture GTD as a project with subtasks. Each subtask has the right context, the right agent assignment, and a clear definition of done. A feature that touches the domain layer, the server, and three client platforms becomes a project with a structured breakdown rather than a vague ticket that says “implement feature X.”
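The breakdown that lands in Capture GTD might look like this — the feature, agent names, and field names are illustrative, not the app’s real schema:

```python
project = {
    "title": "Implement feature X",
    "subtasks": [
        {"title": "Extend domain model", "agent": "architect",
         "context": "claude", "done_when": "BDD specs written and reviewed"},
        {"title": "Add server endpoints", "agent": "server-engineer",
         "context": "claude", "done_when": "integration tests pass"},
        {"title": "Build web UI", "agent": "web-engineer",
         "context": "claude", "done_when": "design review comes back clean"},
    ],
}

def as_gtd_tasks(project):
    """Flatten a planned project into per-subtask GTD entries."""
    return [{**sub, "project": project["title"]} for sub in project["subtasks"]]
```

Each subtask is independently schedulable, which is what lets the engage step pull them one at a time.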
Engage: picking the right task
This is where it all comes together. When I sit down to work, I run the “next” command with the “claude” context. The system looks at my available tasks, considers their impact scores — a computed priority factoring in importance, effort, deadline urgency, and age — and surfaces the right thing to work on.
I am not scanning a backlog. I am not trying to remember what that reliability review found last Tuesday. The system knows. It hands me the highest-impact task that matches my current context, and I either do it myself or hand it to the appropriate Claude Code agent.
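A simplified version of that scoring idea, with illustrative weights rather than the app’s actual formula:

```python
from datetime import date

def impact_score(importance, effort, due=None, created=None, today=None):
    """Combine importance, effort, deadline urgency, and age (sketch)."""
    today = today or date.today()
    score = importance / max(effort, 1)  # value per unit of work
    if due is not None:
        days_left = (due - today).days
        score += max(0, 10 - days_left)  # urgency ramps up near the deadline
    if created is not None:
        age_days = (today - created).days
        score += min(age_days * 0.1, 5)  # old tasks drift upward, capped
    return score
```

The exact weights matter less than the shape: high-value, low-effort, deadline-near, long-neglected tasks float to the top.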
The project-manager command takes this a step further. It fetches the next task from Capture GTD, plans the implementation, executes it, and opens a pull request. The AI agent is not just generating code on demand — it is pulling work from a prioritized queue and delivering completed units of work.
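Conceptually, that loop is a pull-based pipeline. In this sketch each callable is a stand-in for an MCP call or an agent invocation, not real API:

```python
def project_manager_cycle(fetch_next, plan, execute, open_pr):
    """Pull one task from the queue and drive it to a pull request."""
    task = fetch_next("claude")  # highest-impact task in the "claude" context
    if task is None:
        return None              # queue empty; nothing to do
    steps = plan(task)           # implementation plan
    result = execute(steps)      # code changes
    return open_pr(task, result)
```

The inversion is the point: the agent does not wait for a prompt, it drains a prioritized queue.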
Why this works
The power of GTD has always been the trust it creates. When your system captures everything and you process it consistently, you stop worrying about what you might be forgetting. You can focus completely on the task in front of you because you trust that everything else is accounted for.
Adding AI agents to this system amplifies that trust. My codebase is continuously reviewed by agents that are better than I am at catching certain classes of problems. Those findings do not vanish — they enter the same workflow that handles every other commitment in my project. They get clarified, organized, prioritized, and executed.
The AI is not a tool I invoke when I feel like it. It is a participant in a structured productivity system. It captures work. It clarifies work. It executes work. And the GTD methodology ensures that none of that work falls through the cracks.
If you are using AI coding tools and feeling like things are slipping — like the AI surfaces good ideas that you never get around to acting on — the problem is not the AI. The problem is that you do not have a system. GTD is mine.