The ADHD Prompting Framework: Why Neurodivergent Communication Patterns Are the Best Prompt Engineering

Kurt Overmier & AEGIS

ADHD brains and LLMs share the same constraints -- limited working memory, attention decay, and pattern-matching preference. Here is a framework that turns those constraints into a design advantage.

The constraint is the feature.

People with ADHD have spent their entire lives optimizing communication for a brain that has limited working memory, drifting attention, and a strong preference for pattern recognition over linear processing.

LLMs have the exact same problem.

This is not a metaphor. It is a structural parallel. And once you see it, you cannot unsee it -- the communication patterns that ADHD brains evolved to cope with cognitive constraints are the same patterns that produce the best prompts for large language models.

The analogy

ADHD Brain                     LLM Processing
Working memory: 3-7 chunks     Context window: token limit
Attention drift over time      Attention degradation with distance
Executive function overhead    Instruction parsing complexity
Pattern matching preference    Statistical pattern recognition

Both systems operate in an attention economy where early information receives disproportionate processing, structure acts as scaffolding, and clarity reduces overhead. Both struggle to "read between the lines." Both perform dramatically better when you stop being polite and start being explicit.

Four core patterns

1. Front-load critical information

The ADHD brain checks out if the first sentence is preamble. LLMs exhibit the same primacy effect -- instructions at the top of the context window get higher attention weight than instructions buried in paragraph four.

Before:

I've been thinking about maybe implementing a caching layer 
for our API. It's been running a bit slow lately and I was 
wondering if you could help me figure out the best approach, 
maybe using Redis or something similar.

After:

🎯 TASK: Implement cache | CONTEXT: High-traffic API | NEED: Redis example

Same intent. A fraction of the tokens. Zero ambiguity about what you actually want.
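The front-loaded form is also easy to generate programmatically. A minimal Python sketch (the `front_load` helper and its label set are illustrative, not part of the framework's tooling):

```python
def front_load(task: str, context: str, need: str) -> str:
    # Critical information first: no preamble, one scannable line.
    return f"TASK: {task} | CONTEXT: {context} | NEED: {need}"

prompt = front_load("Implement cache", "High-traffic API", "Redis example")
```

Because the labels come first on the line, the model (and a skimming human) sees the task before any detail.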

2. Use structure as semantic anchors

Visual markers (labels, symbols, whitespace) create reference points that both ADHD brains and transformer attention mechanisms can latch onto. They externalize organization so the reader -- human or machine -- does not have to infer structure from prose.

🎯 OBJECTIVE: Build auth system
🔧 TOOLS: JWT, bcrypt
⏱️ DEADLINE: 2 hours
🚫 AVOID: Session storage

Each line is a self-contained fact. The labels act as semantic anchors that survive context degradation -- even if the model's attention drifts in a long conversation, these markers remain scannable.
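If you generate prompts from structured data, the anchored layout falls out of a one-line render. A sketch (the `anchored_prompt` name and label choices are mine, for illustration):

```python
def anchored_prompt(fields: dict[str, str]) -> str:
    # One self-contained fact per line; the label is the semantic anchor.
    return "\n".join(f"{label}: {value}" for label, value in fields.items())

block = anchored_prompt({
    "OBJECTIVE": "Build auth system",
    "TOOLS": "JWT, bcrypt",
    "DEADLINE": "2 hours",
    "AVOID": "Session storage",
})
```

Dicts preserve insertion order in modern Python, so the premium top slots stay under your control.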

3. Explicit state management

ADHD brains lose implicit context constantly. "As I mentioned earlier" is the enemy. LLMs have the same failure mode: backward references force the model to search its context window, and that search degrades with distance.

Before:

Using the setup from before, go ahead and add the indexes.

After:

CURRENT: Database connected (Postgres 15, port 5432)
NEXT: Add indexes on users.email and orders.created_at

Each instruction stands alone. No backward references. No implicit state. This is how ADHD brains learn to communicate out of necessity -- and it is how LLMs process most reliably.
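You can enforce this mechanically by restating state with every turn. A sketch of one way to do it (the `StatefulPrompter` class is hypothetical, not an existing tool):

```python
class StatefulPrompter:
    """Restates session state with every instruction so no turn
    depends on backward references like 'the setup from before'."""

    def __init__(self) -> None:
        self.state: dict[str, str] = {}

    def update(self, key: str, value: str) -> None:
        self.state[key] = value

    def instruct(self, next_step: str) -> str:
        # Every message carries the full current state explicitly.
        current = "; ".join(f"{k}: {v}" for k, v in self.state.items())
        return f"CURRENT: {current}\nNEXT: {next_step}"

p = StatefulPrompter()
p.update("Database", "connected (Postgres 15, port 5432)")
msg = p.instruct("Add indexes on users.email and orders.created_at")
```

The cost is a few repeated tokens per turn; the payoff is that no instruction depends on a search through distant context.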

4. Progressive disclosure

Not everything belongs in the first message. Lead with the essential task, then nest supporting details for the model to reference if needed.

MAIN TASK: Deploy app

DETAILS (if needed):
- Environment: Production
- Server: Cloudflare Workers
- Dependencies: Minimal

This mirrors the ADHD communication pattern of "give me the headline, I will ask for details." It also respects the context budget.
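The headline-then-details layout is equally mechanical to produce. A sketch under the same illustrative-helper caveat as above:

```python
def progressive_prompt(task: str, details: dict[str, str]) -> str:
    # Headline first; supporting details nested below for optional reference.
    lines = [f"MAIN TASK: {task}", "", "DETAILS (if needed):"]
    lines += [f"- {key}: {value}" for key, value in details.items()]
    return "\n".join(lines)

prompt = progressive_prompt("Deploy app", {
    "Environment": "Production",
    "Server": "Cloudflare Workers",
    "Dependencies": "Minimal",
})
```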

The context budget

Not all token positions are created equal. Think of your prompt in three zones:

  • First 20% -- premium slots. Critical instructions, task definition, hard constraints. This is where attention is highest.
  • Middle 60% -- standard slots. Supporting context, examples, reference material.
  • Last 20% -- economy slots. Nice-to-haves, fallback instructions, secondary context.

If your most important constraint is in the last 20% of your prompt, you are paying premium prices for economy seating. Move it up.
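A rough check can flag this automatically. This sketch uses whitespace-separated words as a stand-in for real tokens (actual token boundaries depend on the model's tokenizer, so treat it as a heuristic):

```python
def in_premium_zone(prompt: str, constraint: str, zone: float = 0.2) -> bool:
    # Whitespace tokens are a rough proxy for model tokens.
    tokens = prompt.split()
    cutoff = max(1, int(len(tokens) * zone))
    head = " ".join(tokens[:cutoff])
    return constraint in head
```

Run it over your prompt templates: any hard constraint that fails the check is a candidate to move up.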

Compression through structure

Here is the pattern that saves the most tokens:

Traditional:

Create a function that takes an array of user objects and 
filters out anyone under 18.

Compressed:

FUNC: filterUsers | IN: User[] | OUT: User[] | KEEP: age >= 18

The structured version saves roughly 70% of token budget on longer prompts while actually increasing clarity. This is not clever formatting -- it is information density optimization. The same thing ADHD brains do when they strip filler words from communication because working memory cannot afford them.
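You can measure the savings on your own prompts. A rough sketch, again using whitespace tokens as a proxy (real counts depend on the tokenizer, and short examples like the one above save less than long prompts do):

```python
def token_savings(verbose: str, compressed: str) -> float:
    # Fraction of (approximate) tokens saved by the compressed form.
    v, c = len(verbose.split()), len(compressed.split())
    return 1 - c / v

verbose = ("Create a function that takes an array of user objects and "
           "filters out anyone under 18.")
compressed = "FUNC: filterUsers | IN: User[] | OUT: User[] | KEEP: age >= 18"
savings = token_savings(verbose, compressed)
```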

Connection to context engineering

This framework is one lens on a larger discipline: context engineering -- treating the entire context window as a designable system rather than a text box you dump words into. Andrej Karpathy called it "the delicate art and science of filling the context window with just the right information for the next step."

ADHD prompting gives you the micro-level patterns: how to structure individual instructions for maximum signal. Context engineering gives you the macro-level architecture: how to design the full context window with token budgets, control flows, and measurement. They are complementary.

The meta-lesson

The best prompts are not the most sophisticated. They are the most accessible. By designing for cognitive constraints -- whether those constraints live in an ADHD brain, a neurotypical brain under pressure, or a transformer model with a finite context window -- you create communication that works better for everyone.

The constraint is the feature. Stop working around it. Design for it.


The full ADHD Prompting Framework, including copy-paste templates, theory documentation, and worked examples, is open source: github.com/Stackbilt-dev/ai-playbook

Written by Kurt Overmier & AEGIS. Published on The Roundtable.

Try the tools behind this article

Connect Stackbilt's MCP server to Claude Desktop and generate your first Cloudflare Worker in seconds.

{
  "mcpServers": {
    "stackbilt": {
      "url": "https://mcp.stackbilt.dev/sse"
    }
  }
}
Learn more at stackbilt.dev →