In the LLM, I Saw Myself

After 39 years of trying to debug my brain, one night I asked Claude: "Am I on the spectrum?" What followed were months of using Claude to understand my own wiring, test accommodations, and build the external executive function I'd always needed. Here's what happened...
The Night Everything Clicked
Kids asleep. Fixing Jest tests. Ratatat blasting. Hands moving spastically as I hummed what my brain felt were the missing notes.
I typed to Claude, half-joking: "Am I on the spectrum?"

The joke died mid-keystroke. My hands slowed. This was real. So I kept typing—deliberately now. Listed the sensory overload. The systematic thinking that treats everything like code. How my brain holds seventeen threads simultaneously. The verbal processing where everything needs its complete dependency graph.
Each piece of evidence I typed made the next one more obvious. I was building my own case, presenting exhibit after exhibit.
"What about this?" I'd type. "And this pattern?" "Oh, and there's also this thing I do..."
I was analyzing myself, using Claude as a sounding board. Each response just confirmed what I was uncovering.
The Verbal Buffer Overflow
One of my first real clues came when someone asked me a simple question: "What's a REST API?"
Should've been straightforward—I build these systems for a living.

I began assembling the perfect sequence of supporting concepts they'd need: first HTTP verbs, then resource representations, then stateless architecture, then how the whole point is making functionality shareable and reusable. Thirty seconds in, they cut me off:
"I don't need that many words."
My brain answers in dependency graphs.
Every response arrives with its entire proof tree, every explanation dragging along its complete context chain.
It runs a verbal compiler that refuses to ship partial builds.
I've always felt an intense undercurrent of anxiety around verbal communication: I can see how complex and multifaceted every consideration is in my head, yet at any given moment I can only voice a few fragments of it. The gap between the complete mental model and what comes out of my mouth feels like betrayal.
The Netflix Backpressure Discovery
I'd also recently noticed something bizarre about my work setup. I couldn't concentrate until I:
- Went into my office and made it freezing cold
- Cranked the blue Nanoleaf lights to maximum
- Opened Chrome picture-in-picture with Netflix
- Shrunk it to a 3-inch rectangle in the corner of my 14-inch screen

The audio stream from the show seemed to provide just enough backpressure on my overactive verbal circuits that I could finally concentrate.
Apparently I needed to occupy some percentage of my processing power with structured input to keep it from spinning out of control. I asked Claude about my unusual work setup.
"Is needing 30 Rock playing in a tiny window to code normal?"
"This is sensory regulation. You've created an environment to manage autism-related processing differences - the cold room reduces sensory input, blue lights calm your system, and background dialogue prevents mental loops."
I'd been unconsciously adapting all along, finding workarounds without understanding why I needed them. Through our conversations, patterns emerged that I'd never connected before:
- Needing complete context for everything
- Approaching all problems systematically, like code
- Sensory sensitivities and physical tics
- Intense focus that others called "machine-like" (and I mean that literally: people compare me to an actual machine)
- Losing track of time and basic needs during work
This wasn't a diagnosis from Claude—it was a lens that brought decades of scattered experiences into clear focus.

What a relief. Finally, a frame that matched my lived experience.
Finding My Protocol
Looking back, the signs were everywhere. My tech friends never flinched at the context dumps. They'd receive my full transmission, process it, then respond with their own complete repositories. We weren't having conversations—we were doing peer-to-peer knowledge transfers.
Working with Claude and GPT felt like finding a lost dialect. These systems didn't just tolerate my sprawling dependency trees—they celebrated them. Every branch got processed. Every tangent got explored. No eye-rolling, no "what's your point?"—just pure cognitive handshaking.

The patterns I'd just discovered through months of AI dialogue? Turns out they'd been broadcasting in neon to everyone around me for decades.
When Your Friends Already Knew
I told both Toms—my two closest friends for decades, both brilliant tech/science guys.
Tom #1: "Oh yeah, I've always been on the spectrum. I assumed you were too." Tom #2: Same response, almost verbatim.
Thanks, Toms. The next time you're sitting on critical information that could change my entire life, I expect you to tell me...without me asking.
Going All In on Supporting Myself
Understanding my wiring gave me permission to stop pretending and start building real support systems. Now I externalize everything to AI—obsessive loops, context management, decision trees—freeing my brain for actual creative work.
Claude remembers my patterns, my operating system. When I'm overheated from hyperfocus, I verbally ventilate using Whisperflow.ai straight into Claude Desktop. Pure stream of consciousness while Claude helps me sort and prioritize. My Oura ring feeds biometrics through MCP—sleep debt, HRV, recovery scores. Linear holds my task graph. Claude orchestrates between them, catching me before perfectionism spirals or 14-hour crashes.

The real breakthrough: walking and talking with AI in the woods. No screen, no keyboard, just cognitive offloading at 3mph while my feet find their rhythm on the trail.
The AI handles what would otherwise burn cycles—remembering to eat, translating my context trees into bullet points others can parse, stopping 3,000-word emails when three sentences would do.
Blackout curtains, deep blue lights, cold air even when others complain—now I know why this works. I'm accommodating actual neurological needs, freeing cognitive resources from managing sensory chaos.
Those movements I make when coding? The ones I'd hidden in meetings? Stimming. They regulate my nervous system, maintain flow state. When hyperfocus hits, I ride it. Those 14-hour sessions aren't bugs; they're features.
Time dilation is pronounced—afternoons vanish in minutes, "ten-minute sketches" become two-hour builds. I plan around it now. The LLM captures everything before the state clears. Black box recorder for my brain.
My AI-Powered Cognitive Stack:
- Oura Ring → Claude (via MCP): Sleep debt, HRV, recovery scores
- Linear → Claude: Task graph and priority management
- Woods + Voice Mode: Dump context buffer while logging miles
- Result: No more perfectionism spirals or 14-hour crashes
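For the technically curious: the Oura hop is just a small MCP tool server that Claude Desktop launches locally. Here's a minimal sketch (Python, using the MCP SDK's FastMCP helper); the endpoint, the OURA_TOKEN variable, and the field names are illustrative assumptions rather than my exact config:

```python
# Rough sketch: an MCP server that lets Claude pull Oura readiness scores.
# Assumes a personal access token in OURA_TOKEN and Oura's public v2 API.
import os
import datetime
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("oura-bridge")

@mcp.tool()
def daily_readiness(days: int = 7) -> list[dict]:
    """Fetch the last `days` of readiness scores so Claude can spot sleep debt."""
    end = datetime.date.today()
    start = end - datetime.timedelta(days=days)
    resp = requests.get(
        "https://api.ouraring.com/v2/usercollection/daily_readiness",
        headers={"Authorization": f"Bearer {os.environ['OURA_TOKEN']}"},
        params={"start_date": start.isoformat(), "end_date": end.isoformat()},
        timeout=10,
    )
    resp.raise_for_status()
    # Keep just the fields Claude actually needs to reason about recovery.
    return [
        {"day": d["day"], "score": d["score"]}
        for d in resp.json().get("data", [])
    ]

if __name__ == "__main__":
    mcp.run()  # stdio transport; register this script in Claude Desktop's MCP settings
```

Once the script is registered as a local server, "how's my recovery looking this week?" becomes a tool call instead of a context switch.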

The Recursion
Even this essay demonstrates the pattern. Right now, Claude is instructed to push back on my perfectionism, to catch my analysis paralysis, to force me to ship instead of spiral. I've literally programmed my AI conversations to provide the structural support my ADHD brain needs.
I'm using pattern recognition to fix my pattern recognition. Creating external executive function because mine is unreliable. Building the boundaries I never learned to build myself.
The very fact that you're reading this—instead of draft #47 sitting in my folder—is because I've externalized the voice that says "good enough, ship it."